Simulate other lenses with Time of Flight (ToF) camera

In summary: the optics can be simulated, and a wider field of view may require a stronger light source if the intensity drops too much.
  • #1
aghbar
Hello,

I have a small project in which I try to use a Time of Flight camera to detect movement in the Field of View.
Now the current Field of View is really narrow (about 25° diagonal FoV), which is why I will have to simulate a bigger one.

My professor did try to explain to me what changes, but to be honest I could not follow his explanation, since I'm a real beginner when it comes to optics. Now that I have done some research on the topic I have a lot of questions, but I could not find any suitable explanation yet, so here I am :)

So what I do understand is that a Time of Flight camera emits light with a transmitter and has a receiver, which measures the time of flight of the transmitted photons to get distance measurements.
Now, the things I am not sure about:

- If I would like to have a bigger FoV, do I need to use a lens on both the receiver and the transmitter, or only the receiver?
- What properties change when I use a different lens in my system (luminous flux, power consumption)?
- Is there a way to calculate how these properties would change?
- And my final question is about the brief introduction that my professor gave me, where I took a picture of a formula which I did not really understand (I appended it as a picture). I know that the formulas are about two different lenses with different FoV. But what exactly is calculated there? If it is the observed surface area of the different FoVs, shouldn't it be ##\varphi \propto \left(2\,d\tan\frac{\alpha}{2}\right)^2##, where ##d## is the distance to the object and ##\alpha## is the Field of View?
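For reference, the rough geometry I have in mind (my own simplification, treating the target as a flat surface at distance ##d##):

$$\text{linear size of the viewed patch} \;\propto\; d\tan\frac{\alpha}{2} \quad\Rightarrow\quad \text{viewed area} \;\propto\; \left(d\tan\frac{\alpha}{2}\right)^2$$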

I know those are a lot of questions. But I hope someone can try to help me understand the topic a bit more.

Thanks in advance
 

Attachments

  • append.png (70.3 KB)
  • #2
What do you mean by a "time of flight" camera? Are you measuring the time for a light pulse to cross a gap (that will be around one foot per nanosecond)?
The optics can spread the pulse out if the distance traveled is not the same over the beam.
Have I got it wrong?
 
  • #3
I'm using the VL53L1X camera from ST.
 
  • #4
To detect something it has to be both in the beam of the light source and in view of the camera. I don't know what your 25 degrees refer to, but you'll have to consider both sides if you want a larger field of view.
aghbar said:
-What properties change when I use a different lens in my system (luminous flux, power consumption)?
The flux for sure. The time will change, you'll have to re-calibrate the device if this is a relevant effect. You might need a stronger light source if the intensity goes down too much.
aghbar said:
-Is there a way to calculate how these properties would change?
Sure, you can simulate the optics.

##\sin \frac \alpha 2 \approx \tan \frac \alpha 2## if the angle is not too large, and for an "is proportional to" relation the factor 2 does not matter.
 
  • #5
aghbar said:
I'm using the VL53L1X camera from ST.
From the spec sheet, it wasn't clear to me what algorithm the camera uses but it must be able to cope with missing returns at low levels.
I guess the optics wouldn't be too critical because the resolution (error is around 20mm) is non-demanding.
aghbar said:
Do I need to use a lens on both the receiver and the transmitter or only the receiver?
Everything is on the chip, afaics, so the emerging laser light will pass through the same lens. If you increase the field of view, it will involve both the illumination of the object and also the 'receiver gain'. I reckon that must mean doubling the distance would reduce the overall sensitivity to 1/16 (the inverse square law would apply twice, so 1/4 × 1/4). That would imply the maximum usable range would be 1/4 with the wide-angle mod.
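Roughly, treating the target as small compared with the illuminated patch, so the inverse square law applies on both legs:

$$P_{\text{return}} \;\propto\; \frac{1}{d^2}\times\frac{1}{d^2} \;=\; \frac{1}{d^4}, \qquad d\rightarrow 2d \;\Rightarrow\; P_{\text{return}}\rightarrow \tfrac{1}{16}\,P_{\text{return}}$$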
 
  • #6
sophiecentaur said:
I reckon that must mean doubling the distance would reduce the overall sensitivity to 1/16
Where do you see a doubled distance?

Spreading out the signal over 4 times the area but also collecting it over 4 times the area doesn't change the signal strength. It might increase the background, however.
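Put another way (my shorthand, for an object that fills the field of view in both setups):

$$P_{\text{return}} \;\propto\; \underbrace{\frac{P_{\text{emitted}}}{A_{\text{illuminated}}}}_{\text{irradiance on the object}} \times \underbrace{A_{\text{viewed}}}_{\text{area the camera sees}} \;=\; \text{const} \quad \text{when } A_{\text{viewed}} \approx A_{\text{illuminated}}$$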
 
  • #7
sophiecentaur said:
Everything is on the chip, afaics so the emerging laser light will pass through the same lens.

From the datasheet it looks like the laser and the sensor have separate apertures. Page 26 has a diagram with EMT and RTN cones, which I assume stand for Emitter and Receiver, though I'm not particularly certain about that. What are your thoughts?
 
  • #8
mfb said:
To detect something it has to be both in the beam of the light source and in view of the camera. I don't know what your 25 degrees refer to, but you'll have to consider both sides if you want a larger field of view. The flux for sure. The time will change, you'll have to re-calibrate the device if this is a relevant effect. You might need a stronger light source if the intensity goes down too much. Sure, you can simulate the optics.

##\sin \frac \alpha 2 \approx \tan \frac \alpha 2## if the angle is not too large, and for an "is proportional to" relation the factor 2 does not matter.

I'm sorry, I meant 27°, and it is the listed diagonal Field of View in the datasheet.

I don't know if the light passes through the same lens for transmitter and receiver.

So from what I understand, using a wider-angle lens would mean I definitely have to adjust for that on both the receiver and the transmitter.

Can you guys recommend a program with which I can simulate the changes in the light source with different lenses?
 
  • #9
mfb said:
Where do you see a doubled distance?
I could have put my post better. I was just pointing out the sharp drop-off with a bigger range.
mfb said:
Spreading out the signal over 4 times the area but also collecting it over 4 times the area doesn't change the signal strength. It might increase the background, however.
This is true when the object takes up the whole FOV. As the object approaches a point, the ISL applies twice (##1/d^4##). This accounts for the 'black hole' effect for objects in the background of a flash photo.
I didn't understand the operation of the camera fully, but I am assuming that the optics would need to blur (low-pass filter) the image so that each of the (very few) sensors would see a section of the scene with few gaps or overlaps. Using a wide-field conversion would need care if this is to be maintained (i.e. not to lose small moving objects in the gaps in between).

Drakkith said:
From the datasheet it looks like the laser and the sensor have separate apertures. Page 26 has a diagram with EMT and RTN cones, which I assume stand for Emitter and Receiver, though I'm not particularly certain about that. What are your thoughts?
Is that some attempt to compensate for the drop-off of sensitivity at the edges? If one field is wider and tailored in the middle in some way, the overall sensitivity over the field could be more uniform. But the OP's figure of 25° and the FOV used on page 15 are significantly different from the 36° and 39° figures on P26.
At 25° off axis, the return could be 40% lower than on axis. (People may argue it's 20%.)
 
  • #10
sophiecentaur said:
This is true when the object takes up the whole FOV.
Well, apparently it is larger than the current field of view.
 
  • #11
mfb said:
Well, apparently it is larger than the current field of view.
That unit is being suggested for use in robots to detect obstacles and to detect hand gestures. It's a bit late to avoid an obstacle that takes up the whole field of view. See the last of the three pdfs in the link to the device and also the User Manual. It implies you could spot a sheep in a field. :smile:
 
  • #12
sophiecentaur said:
Is that some attempt to compensate for the drop-off of sensitivity at the edges?
I haven't a clue.
 
  • #13
Drakkith said:
I haven't a clue.
One of life's mysteries.
But the OP has quite a few hidden difficulties, and the basic answer is that the illumination needs to have a similar angular field or you will waste light energy or have a dark surround to the scene (just as for a good camera flash with zoom, linked to the camera lens). Geometry (as discussed) can tell you the resulting light level thrown back. However, once the angle gets wider than around 10° from the axis, there is a significant drop-off, as the hypotenuse (to the edge of an illuminated plane) will increase as ##1/\cos(\Phi)##. This can be corrected for, of course.
A "program" as such is not needed. All you need to do is some simple trigonometry on your calculator.
 
  • #14
sophiecentaur said:
Geometry (as discussed) can tell you the resulting light level thrown back. However, once the angle gets wider than around 10° from the axis, there is significant drop of as the hypotenuse (to the edge of an illuminated plane will increase as 1/cos(Φ). This can be corrected for, of course.
A "program", as such is not needed. All that you need to do is some simple trigonometry on your calculator.

I'm sorry, I still don't really understand.
What unit am I working with when calculating the light thrown back?
And what do you mean by the drop-off at a specific angle? Is there a law for that?
 
  • #15
aghbar said:
I'm sorry, I still don't really understand.
What unit am I working with when calculating the light thrown back?
And what do you mean by the drop-off at a specific angle? Is there a law for that?
I have to wonder what your Professor told you about this and what he actually expected you to know already. The basics of what happens to the light are the same as for pretty much any optical system. The effect of distance when sources are fairly small is just the Inverse Square Law.
Details of the "throw back" (reflectivity) are not important for this calculation because the same thing applies for the unmodified system. All you need to know is the distance traveled by the light from the source and use ISL to find the light flux density at the centre relative to the edges. This is simple trigonometry, and the cos of the angle tells you the length of the hypotenuse. I can't imagine that a specific program has been written to do what you want; it's such basic Physics and Geometry.
A unit for power flux density could be W/m², but the spec will help you there.
 
  • #16
aghbar said:
I'm sorry, I still don't really understand.
What unit am I working with when calculating the light thrown back?
And what do you mean by the drop-off at a specific angle? Is there a law for that?

Unfortunately optics is not usually a simple case of "find a law and apply it". Even the simplest optical systems are very complicated, with many different laws, properties, and even units to choose from and work with.

To help us help you, do you know the desired field of view of your sensor? Do you know the desired distance to the objects whose speeds are being measured? When you say you want to 'simulate' a larger FoV, what exactly do you mean by that?
 
  • #17
sophiecentaur said:
I have to wonder what your Professor told you about this and what he actually expected you to know already. The basics of what happens to the light are the same as for pretty much any optical system. The effect of distance when sources are fairly small is just the Inverse Square Law.
Details of the "throw back" (reflectivity) are not important for this calculation because the same thing applies for the unmodified system. All you need to know is the distance traveled by the light from the source and use ISL to find the light flux density at the centre relative to the edges. This is simple trigonometry, and the cos of the angle tells you the length of the hypotenuse. I can't imagine that a specific program has been written to do what you want; it's such basic Physics and Geometry.
A unit for power flux density could be W/m², but the spec will help you there.

That helps me a lot. Thanks for that, but I can't find anything in the spec that gives me the emitted power of the camera.

Drakkith said:
Unfortunately optics is not usually a simple case of "find a law and apply it". Even the simplest optical systems are very complicated, with many different laws, properties, and even units to choose from and work with.
To help us help you, do you know the desired field of view of your sensor? Do you know the desired distance to the objects whose speeds are being measured? When you say you want to 'simulate' a larger FoV, what exactly do you mean by that?

The goal is to see the differences between different lenses in the range between the existing 25° and 90°. More specifically, what kind of drawbacks are to be expected when using a wider-angle lens. The datasheet says that the camera has a maximum distance of 3 m+ with the current lens. But if I keep emitting with the same power as before, there should be a considerably smaller maximum distance with a wider lens.

And the other question is how objects would scale in the different fields of view. How will I have to scale a person (what size should the person have) walking through the narrow FoV to be representative of the wide FoV? Can I just scale them according to the surface area that is spanned by the FoV:
for a 27° diagonal FoV and a distance of 2 m: $$0.46\,\mathrm{m^2}$$
for an 80° diagonal FoV and a distance of 2 m: $$5.63\,\mathrm{m^2}$$

so a scale factor of $$\frac{5.63\,\mathrm{m^2}}{0.46\,\mathrm{m^2}} = 12.23$$

Or will I have to consider other properties?
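A quick numerical check of those figures (my own sketch, assuming a square field of view whose diagonal subtends the listed diagonal FoV and a flat target at 2 m; covered_area is just a name I made up):

Python:
import math

def covered_area(diag_fov_deg, distance_m):
    # Flat, square patch at the given distance whose diagonal subtends the
    # diagonal field of view (my own simplifying assumption).
    diagonal = 2 * distance_m * math.tan(math.radians(diag_fov_deg) / 2)
    return diagonal ** 2 / 2         # a square with diagonal D has area D^2 / 2

narrow = covered_area(27, 2.0)       # ~0.46 m^2
wide = covered_area(80, 2.0)         # ~5.63 m^2
print(round(narrow, 2), round(wide, 2), round(wide / narrow, 1))   # scale factor ~12.2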
 
Last edited:
  • #18
@aghbar I see where you're at now. I mentioned a camera flash gun earlier and that's what you need to be thinking in terms of. The TOF camera is a very specific example of a much more general principle.
If you want to get a bit more of an idea about context then you should Google Camera Flash Guide Numbers and find some nice chatty, semi academic articles about the subject. I think this could be great for your confidence in any calculations you do. You could impress your Prof, too!

The calculations at the level you need are no worse than those mentioned above but, when the object is large and the TOF camera has an image that covers a significant area of the sensor, there can be a great signal-to-noise advantage. The limit is when the object takes up the whole FOV, and then the energy getting back to the camera is almost independent of what an additional lens does to the FOV. (Mentioned earlier.)
 
  • #19
aghbar said:
The goal is to see the differences between different lenses in the range between the existing 25° and 90°. More specifically, what kind of drawbacks are to be expected when using a wider-angle lens. The datasheet says that the camera has a maximum distance of 3 m+ with the current lens. But if I keep emitting with the same power as before, there should be a considerably smaller maximum distance with a wider lens.

For a given object at a given distance, the reflected power is going to grow smaller as the FoV increases since the light is spread out over a larger area before reflecting back. The object will have to be closer to the camera for the same signal to noise ratio, as Sophie mentioned. Another thing to think of is the change in the resolution of the camera. You're not increasing the number of pixels, so each pixel is going to see a much larger portion of the FoV than it did before. This may or may not be desirable.
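A rough way to put numbers on that (a minimal sketch, assuming the same total emitted power spread uniformly over the field of view and an object small compared with the illuminated patch; the 27° reference is just the current FoV from this thread):

Python:
import math

def relative_signal(fov_deg, ref_fov_deg=27):
    # Signal from a small object relative to the narrow-FoV case, assuming the
    # same emitted power is spread over an area proportional to tan^2(FoV / 2).
    spread = math.tan(math.radians(fov_deg) / 2) ** 2
    reference = math.tan(math.radians(ref_fov_deg) / 2) ** 2
    return reference / spread

print(round(relative_signal(80), 2))   # ~0.08, i.e. roughly 12x less light on the object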

aghbar said:
And the other question is how objects would scale in the different fields of view. How will I have to scale a person (what size should the person have) walking through the narrow FoV to be representative of the wide FoV? Can I just scale them according to the surface area that is spanned by the FoV:

That looks plausible to me.
 
  • #20
aghbar said:
Thanks for that, but I can't find anything in the spec that gives me the emitted power of the camera.
That tells you something about how significant it may be. So perhaps the absolute level doesn't matter very much and it's only the relative power that counts. What do you think? The reflectance of the target would be at least as important (black cats in coal mines etc.) and the maximum usable range is mentioned.
 
  • #21
With a 90° FoV, at the corners you will get only about 12.5% of the signal strength due to Lambert's Law (##\cos^4## fall-off, see http://dougkerr.net/Pumpkin/articles/Cosine_Fourth_Falloff.pdf, which applies to both the light source and the detector). Of course the distance calibration will vary across the FoV because of the slant distance to the target.

Sounds like you will learn a lot on this project!
 
Last edited:
  • #22
Thanks for all the answers,
this really makes me a lot more comfortable.

Tom.G said:
With a 90° FoV, at the corners you will get only about 6% of the signal strength due to Lambert's Law (##\cos^4## fall-off, see http://dougkerr.net/Pumpkin/articles/Cosine_Fourth_Falloff.pdf, which applies to both the light source and the detector). Of course the distance calibration will vary across the FoV because of the slant distance to the target.

Sounds like you will learn a lot on this project!

How did you calculate those 6% from Lambert's law at 90°?
 
  • #23
aghbar said:
How did you calculate those 6% from Lambert's law at 90°?
Edit:
Oops! You are right. That 6% should be 12.5%. At 90° FOV the half-angle is 45°, ##\cos(45°) = 0.707##, ##0.707^4 = 0.25##; times ##0.707^2 = 0.125##. Will correct in my earlier post. Sorry.
End Edit:

I considered the ##\cos^4## fall-off of the imager, and assumed the illumination fall-off to be only ##\cos^2## because there is only one surface involved. I think this would be valid at least for a steered illumination beam.
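Spelled out with that split (imager ##\cos^4##, illumination ##\cos^2##), the corner figure is:

$$\cos^4(45^\circ)\times\cos^2(45^\circ) \;=\; \cos^6(45^\circ) \;=\; \left(\tfrac{1}{\sqrt{2}}\right)^6 \;=\; \tfrac{1}{8} \;=\; 0.125$$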

If there are any Optical gurus reading this, please jump in for any corrections or confirmation.
 
Last edited:

1. What is a Time of Flight (ToF) camera?

A Time of Flight (ToF) camera is a type of camera that measures the time it takes for light to travel from the camera to the subject and back. This allows the camera to create a depth map of the scene, making it possible to simulate different lenses and achieve various effects.

2. How does a ToF camera simulate other lenses?

A ToF camera uses the depth information captured by the camera to digitally simulate the effects of different lenses. By adjusting the depth of field, focal length, and other lens parameters, a ToF camera can create a variety of lens effects, such as bokeh and wide-angle distortion.

3. Can a ToF camera simulate any type of lens?

A ToF camera can simulate most types of lenses, but it may have limitations in simulating specialty lenses such as fisheye or tilt-shift lenses. It is best suited for simulating standard lenses with fixed focal lengths.

4. What are the benefits of using a ToF camera to simulate lenses?

Using a ToF camera to simulate lenses has several benefits. It allows for real-time adjustments and previews of different lens effects, without the need for physical lens changes. It also offers more precise control over the depth of field and allows for post-processing adjustments.

5. Are there any limitations to using a ToF camera to simulate lenses?

While a ToF camera can simulate many different lenses, it may not be able to replicate the exact optical characteristics of each lens. Additionally, the quality of the depth map captured by the ToF camera can also affect the accuracy of the simulated lens effect.
