Chief Ray Angle and Field of View

  • Thread starter Frank-95
Hi all!
I was studying the datasheet of a CMOS sensor when I bumped into a graph which defined the CRA (Chief Ray Angle) of a lens as a function of the FoV (Field of View).

cdn.sparkfun.com/datasheets/Dev/RaspberryPi/ov5647_full.pdf page 134

I've understood what the CRA is, and I have a general (if imprecise) idea of the FoV, but I still haven't understood the precise difference between the two.

https://ocw.mit.edu/courses/mechanical-engineering/2-71-optics-spring-2009/video-lectures/lecture-6-terms-apertures-stops-pupils-and-windows-single-lens-camera/MIT2_71S09_lec06.pdf slide 8

According to the presentation above, their concept of FoV is analogous to what I understand the CRA to be: the maximum angular aperture from the center of the lens. What does the graph mean, then? It seems logical to me that the two quantities should be linearly proportional, as at the beginning of the graph; but I cannot understand how the CRA can saturate (and even diminish) while the FoV increases.

Moreover, what does image height mean in the table below the graph?

Thank you very much indeed
 
pixel
I haven't read all of the first reference you linked to, but as it's the datasheet for a sensor and not an optical system, what is it they are showing the FOV of?

I agree with your understanding of the FOV being defined by the chief ray angle, so normally we would expect a linear relation between the CRA and the FOV when the latter is expressed as an angle. But the legend on the plot appears to be the fraction of the full FOV, maybe expressed as a distance. So maybe there's a tangent involved which would make the relationship nonlinear. But why would a sensor have a FOV?
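To see how a tangent would bend the curve, here's a quick numerical sketch (thin-lens model; the focal length and half-FOV are made-up numbers, not taken from the datasheet):

```python
import math

# Assumed thin lens: 3.6 mm focal length, 30 degree half-FOV (illustrative only).
f_mm = 3.6
half_fov_deg = 30.0
h_max = f_mm * math.tan(math.radians(half_fov_deg))  # image height at full field

for angle_deg in (10.0, 20.0, 30.0):
    h = f_mm * math.tan(math.radians(angle_deg))
    # "Fraction of full FOV" expressed as a distance on the sensor lags the
    # fraction expressed as an angle, because tan grows faster than linearly.
    print(f"{angle_deg:4.1f} deg -> angle fraction {angle_deg / half_fov_deg:.3f}, "
          f"height fraction {h / h_max:.3f}")
```

So if the plot's legend is fractional image height rather than fractional angle, a nonlinear CRA-vs-"FOV" curve is expected even without aberrations.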
 

Drakkith

Staff Emeritus
Science Advisor
2018 Award
I haven't read all of the first reference you linked to, but as it's the datasheet for a sensor and not an optical system, what is it they are showing the FOV of?
The "Key Specifications" section includes two bullet points for lens size and lens chief ray angle, so I assume it either includes a lens assembly or is designed for a specific one?

I agree with your understanding of the FOV being defined by the chief ray angle, so normally we would expect a linear relation between the CRA and the FOV when the latter is expressed as an angle. But the legend on the plot appears to be the fraction of the full FOV, maybe expressed as a distance. So maybe there's a tangent involved which would make the relationship nonlinear.
Hmm... perhaps they're talking about the CRA after it passes through an optical system? If so, I'm not sure how you'd relate the two without knowing the system details.

Moreover, what does image height mean in the table below the graph?
The image height is just the radial size of the image formed at the focal plane. You can think of it as the radius of the full image formed by the optical system. It can also represent the size of the image of a single object. If you're imaging, say, a tree, the height of the tree's image is just the radial size of the image at the focal plane.
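Putting rough numbers to the tree example (all values invented; small-angle thin-lens relation for a distant object):

```python
# Image height ~ focal length * (object height / object distance) for a
# distant object. All numbers below are illustrative, not from the thread.
focal_length_mm = 50.0
tree_height_m = 10.0
distance_m = 100.0

image_height_mm = focal_length_mm * tree_height_m / distance_m
print(f"the tree's image is about {image_height_mm:.1f} mm tall")  # 5.0 mm
```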
 

Andy Resnick

Science Advisor
Education Advisor
Insights Author
Yikes- those PDF slides 5-11 are 'yuck'. Let's start from the beginning: when analyzing an optical system with ray tracing, you need at least 2 rays. For convenience, 2 rays commonly used are the Chief ray and Marginal ray. These rays are defined by what they do at the aperture stop: the marginal ray passes through the edge of the aperture stop, while the chief ray passes through the center of the aperture stop.

The entrance and exit pupils are projections of the aperture stop into object and image space, thus the chief and marginal rays behave the same at those pupil planes.

The field stop, on the other hand, is generally unrelated to the aperture stop. Certainly, there are imaging methods where they have specific relationships (Köhler imaging, for example), but in general the field stop is an independent element. It is true that the field stop (and also the size of the image plane) defines the angular field of view.

How's that so far?
 
Sorry for not answering you, but after some days I thought the thread was dead, and forgot about this. My fault.

I haven't read all of the first reference you linked to, but as it's the datasheet for a sensor and not an optical system, what is it they are showing the FOV of?

I agree with your understanding of the FOV being defined by the chief ray angle, so normally we would expect a linear relation between the CRA and the FOV when the latter is expressed as an angle. But the legend on the plot appears to be the fraction of the full FOV, maybe expressed as a distance. So maybe there's a tangent involved which would make the relationship nonlinear. But why would a sensor have a FOV?
It's a CMOS sensor plus a focusing lens in front of it.


Hmm... perhaps they're talking about the CRA after it passes through an optical system? If so, I'm not sure how you'd relate the two without knowing the system details.
The image height is just the radial size of the image formed at the focal plane. You can think of it as the radius of the full image formed by the optical system. It can also represent the size of the image of a single object. If you're imaging, say, a tree, the height of the tree's image is just the radial size of the image at the focal plane.
More info about the lens is here: http://www.truetex.com/raspberrypi

These are my qualitative conclusions: the CRA in the datasheet is on the image (sensor) side, while the FOV is on the object side. As the CRA increases, the FOV increases linearly with it, but because of lens aberrations and the excessive slope of the rays at the photosites, the FOV appears distorted at the edges, and thus saturates as the CRA reaches 30 degrees.
 

Drakkith

More info about the lens is here: http://www.truetex.com/raspberrypi

These are my qualitative conclusions: the CRA in the datasheet is on the image (sensor) side, while the FOV is on the object side. As the CRA increases, the FOV increases linearly with it, but because of lens aberrations and the excessive slope of the rays at the photosites, the FOV appears distorted at the edges, and thus saturates as the CRA reaches 30 degrees.
After talking with one of the seniors here at my school, I think it is possible for the chief ray angle in image space to begin to decrease as the FOV gets large without it being an aberration. But we're just students, so we can't be sure.
 
He said that it is possible, but did he give any reasons?
 
pixel
After talking with one of the seniors here at my school, I think it is possible for the chief ray angle in image space to begin to decrease as the FOV gets large without it being an aberration. But we're just students, so we can't be sure.
Wouldn't any deviation from the paraxial image location be an aberration by definition?
 

Drakkith

He said that it is possible, but did he give any reasons?
Not really, sorry.

Wouldn't any deviation from the paraxial image location be an aberration by definition?
I think so, but I'm not sure if this results in a deviation from the paraxial image location or not.
 
CRA is the term used primarily for image sensors that have a microlens array on them (to eliminate the gaps between the physical pixels). Typically, the pitch of the lenslets changes, so that the lenslets near the edges of the sensor are shifted closer to the center of the sensor (the pitch decreases).
Field of view is the term describing the system containing the imaging lens and the sensor. On a very basic level, it is the whole angle (in object space) spanned by the imaging array.
If there were no lenslet array, then the edge pixels would define the FOV (just draw the two extreme chief rays - the rays passing through the centers of the pupils and hitting the centers of the edge pixels - and find the angle between them). With the lenslet array, the situation changes (but only very slightly).
If the imaging lens were designed exactly for the sensor specifications, then the FOV would be directly connected to the CRA (FOV = 2*CRA). In real life this is not really possible, and the relation is approximate.
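The edge-pixel construction above can be sketched numerically (thin-lens model; the sensor half-width and focal length are assumed values, not taken from the OV5647 datasheet):

```python
import math

# Assumed geometry, for illustration only.
sensor_half_width_mm = 1.82   # half the active-array width
focal_length_mm = 3.6         # effective focal length

# Image-side chief ray angle to the edge pixel (ray through the pupil center):
cra_deg = math.degrees(math.atan(sensor_half_width_mm / focal_length_mm))

# For a lens designed exactly for the sensor, FOV = 2 * CRA:
fov_deg = 2 * cra_deg
print(f"edge CRA = {cra_deg:.1f} deg, full FOV = {fov_deg:.1f} deg")
```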
 
pixel
I think so, but I'm not sure if this results in a deviation from the paraxial image location or not.
Instead of talking about FOV and CRA, let's talk instead about an object and an image, the aforementioned angles being related to these through the tan function. I believe that in paraxial optics any increase in the object height results in a corresponding increase in the image height, through the relationship of the system's magnification. I can't think of a situation in paraxial optics where, if you keep increasing the object height, the image height will increase up to a point and then start decreasing. Thus, if the image height does start to decrease, then it is deviating from the expected paraxial result, and this would be attributed to aberrations.
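The paraxial argument can be put in numbers (the magnification value is an arbitrary example):

```python
# In the paraxial picture, image height = magnification * object height, so
# the image height grows monotonically with object height and can never turn
# over and decrease. The magnification below is an arbitrary illustrative value.
m = -0.01  # inverted, strongly demagnified image

object_heights = [1.0, 2.0, 4.0, 8.0]
image_heights = [m * h for h in object_heights]

# |image height| strictly increases with object height:
assert all(abs(b) > abs(a) for a, b in zip(image_heights, image_heights[1:]))
print(image_heights)
```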
 

Drakkith

CRA is the term used primarily for image sensors that have a microlens array on them (to eliminate the gaps between the physical pixels). Typically, the pitch of the lenslets changes, so that the lenslets near the edges of the sensor are shifted closer to the center of the sensor (the pitch decreases).
Interesting. I guess that would explain why ray info is included in a sensor data sheet.
 
pixel
Typically, the pitch of the lenslets changes, so that the lenslets near the edges of the sensor are shifted closer to the center of the sensor (the pitch decreases).
Do you have a reference for this? My understanding is that the lenslets are centered on each receptor in the sensor and those receptors are regularly spaced.
 
CRA is the term used primarily for image sensors that have a microlens array on them (to eliminate the gaps between the physical pixels). Typically, the pitch of the lenslets changes, so that the lenslets near the edges of the sensor are shifted closer to the center of the sensor (the pitch decreases).
Field of view is the term describing the system containing the imaging lens and the sensor. On a very basic level, it is the whole angle (in object space) spanned by the imaging array.
If there were no lenslet array, then the edge pixels would define the FOV (just draw the two extreme chief rays - the rays passing through the centers of the pupils and hitting the centers of the edge pixels - and find the angle between them). With the lenslet array, the situation changes (but only very slightly).
If the imaging lens were designed exactly for the sensor specifications, then the FOV would be directly connected to the CRA (FOV = 2*CRA). In real life this is not really possible, and the relation is approximate.
Thank you for your contribution. Anyway, I'd like to quote @pixel's question.

Do you have a reference for this? My understanding is that the lenslets are centered on each receptor in the sensor and those receptors are regularly spaced.
Also, I understood that the photosites (I'd rather use this term than pixels) are equally spaced (see page 25 of the datasheet in the first post). Why would they be different at the edges?

P.S. I'm not a native English speaker. By "pitch of the lenslets", do you mean their inclination relative to the normal?
 
Could you please suggest any book/article/report that discusses the image height-CRA-FOV relation in a photographic environment (where there is a sensor after the lens)? I have to write a paper, and I need "official" references.
Moreover, if @Greg123 could provide some references for what he said, I would be very grateful.

Unfortunately, my reference professor teaches photonics and has optics knowledge, but it is aimed towards optical communications, not photographic optics.

Thank you very much indeed, and sorry, but this is not trivial at all :/
 

Drakkith

Could you please suggest any book/article/report that discusses the image height-CRA-FOV relation in a photographic environment (where there is a sensor after the lens)? I have to write a paper, and I need "official" references.
Are you asking about how the chief ray is related to image height at the sensor (aka the image plane) and not in the context of microlens arrays? If so, then literally any decent book on Geometrical Optics should go into this in detail.

It's a little difficult to give you a reference because I have no idea how in-depth you want to go. Plenty of PDFs and slideshows can be found online that cover the absolute basics, but I have yet to find one that goes into detail and teaches you everything you need to know to do all the calculations. If you don't need to learn this, then any of those should do just fine.
 
I'd need them related to sensors, and even better, related to lenslet arrays. I've understood what the CRA and FOV are, even though I had to gather information from several websites; nevertheless, it would be nice to have an article or book that explains microlens arrays, which I can use as a reference in my work.

Let's say it's not fundamental for the work, but I'm curious, and it drives me mad that I cannot get a precise quantitative explanation of the saturation (and even the decrease) of the CRA as the FoV increases. It seems that as the object-side angle increases, the sensor-side angle decreases or stays fixed. If I ever published a scientific work, I would want to include at least a minimal explanation of what I showed.
 
pixel
Let's say it's not fundamental for the work, but I'm curious, and it drives me mad that I cannot get a precise quantitative explanation of the saturation (and even the decrease) of the CRA as the FoV increases.
Maybe try contacting OmniVision and asking them about it.
 

Drakkith

I'd need them related to sensors, and even better, related to lenslet arrays.
Usually you can just assume that the sensor is placed at the focal plane of the system and the chief ray gives the image height at the focal plane (and thus the sensor). If you want to get into more details with regard to lenslets and how they affect things then I can't help you, as that is well beyond my knowledge level at the moment. About the best info I could find was from page 8 on this pdf: http://www.onsemi.com/pub/Collateral/NOIP1SN025KA-D.PDF
 
Maybe try contacting OmniVision and asking them about it.
I'm going to try, although I was already ignored once.

Usually you can just assume that the sensor is placed at the focal plane of the system and the chief ray gives the image height at the focal plane (and thus the sensor). If you want to get into more details with regard to lenslets and how they affect things then I can't help you, as that is well beyond my knowledge level at the moment. About the best info I could find was from page 8 on this pdf: http://www.onsemi.com/pub/Collateral/NOIP1SN025KA-D.PDF
Thank you for the link :)
 
Usually you can just assume that the sensor is placed at the focal plane of the system and the chief ray gives the image height at the focal plane (and thus the sensor). If you want to get into more details with regard to lenslets and how they affect things then I can't help you, as that is well beyond my knowledge level at the moment. About the best info I could find was from page 8 on this pdf: http://www.onsemi.com/pub/Collateral/NOIP1SN025KA-D.PDF
Can anyone please confirm if my understanding is correct: if a manufacturer states that its sensor has 0 CRA, does this mean that the microlenses at the edges/corners are aligned to their respective photodiodes just like those in the center? If a sensor is said to have a non-zero CRA, does this mean that the microlenses at the edges/corners are shifted off-center with respect to the photodiodes to achieve this non-zero CRA?

Thanks!
 
Yes, that is correct, @Drakkith. The peak responsivity angle is 0° in the center of the image, where the microlenses and pixels are perfectly lined up, and it becomes increasingly non-zero away from the sensor center to accommodate the increasing CRA of the optical system.

The same CMOS image sensor can even come in different versions, with varying degrees of microlens-to-pixel shift to fit different kinds of optics.

For some reason, sensor manufacturers tend to obscure the meaning of these graphs by just writing "CRA (degrees)" on the y-axis, instead of saying what it actually is, namely the peak responsivity angle of the microlens-pixel combination.
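A rough numerical sketch of the lenslet-shift idea (the stack height, and the model shift = stack height * tan(CRA), are illustrative assumptions, not taken from any datasheet):

```python
import math

# To keep the peak responsivity aligned with the design chief ray, a lenslet
# sitting a small stack height above its photodiode is shifted toward the
# sensor center by roughly stack_height * tan(CRA at that image point).
# All numbers below are illustrative assumptions.
stack_height_um = 3.0  # assumed lenslet-to-photodiode distance

for cra_deg in (0, 10, 20, 30):
    shift_um = stack_height_um * math.tan(math.radians(cra_deg))
    print(f"CRA {cra_deg:2d} deg -> lenslet shift = {shift_um:.2f} um")
```

The shift is zero at the center (CRA = 0) and grows toward the edges, which matches the description above of lenslet pitch decreasing toward the sensor edge.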
 