This problem arose in modeling camera focusing movement, such as a control system might do.
It assumes a simple (thin) lens, rays close to the optical axis, and monochromatic light. Most camera lenses are not simple lenses, but this serves as a first approximation.
Camera lenses project an image of a distant object (the subject of the photo) on a screen (the film or digital sensor). When the object is very far away (at "photographic infinity"), the rays coming from it are nearly parallel (collimated), and the back distance (from lens to film/sensor) that gives the sharpest image is equal to the focal length of the lens (by the definition of focal length). But when the object is nearby, the back distance must be increased to bring the projected image into focus.
Classically, the image is in sharpest focus when the back distance B, object distance D, and focal length f satisfy 1/B + 1/D = 1/f. However, real cameras do not focus by moving the back (film or sensor); they focus by moving the lens forward, towards the object. This increases B, but also decreases D by the same amount. When D is large, this decrease is insignificant and can be (and usually is) ignored. But in close-up photography, and especially extreme close-up (macro lens) photography, the difference can be significant.
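For example (numbers just for illustration), with a 50 mm lens and a subject 5000 mm away, 1/B = 1/50 - 1/5000, so B is about 50.5 mm: the lens has to sit roughly half a millimeter farther from the film than it does at infinity focus.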
Starting from the lens in its infinity focus position, and calling the focusing distance added (i.e., additional bellows extension) delta, the above formula becomes 1/(f + delta) + 1/(D - delta) = 1/f.
It's easy to devise an algorithm that gives an approximate solution: loop, incrementing delta by a fixed step until the two sides of the equation agree to within some tolerance.
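In code, the brute-force search I have in mind looks roughly like this (a minimal sketch in Python; the step size, tolerance, and the 50 mm / 5 m example values are mine for illustration, not anything a real camera uses):

```python
def find_delta_brute_force(f, D, step=0.001, tol=1e-6):
    """Increment delta until 1/(f + delta) + 1/(D - delta) is within tol of 1/f.

    All distances in the same units (e.g., mm). Returns None if no delta
    satisfies the tolerance at this step size.
    """
    target = 1.0 / f
    delta = 0.0
    while delta < D - f:  # delta can never reach D; in practice it stays far smaller
        if abs(1.0 / (f + delta) + 1.0 / (D - delta) - target) < tol:
            return delta
        delta += step
    return None

print(find_delta_brute_force(50.0, 5000.0))  # ~0.5 mm for a 50 mm lens focused at 5 m
```

Even for this half-millimeter extension, the loop runs through hundreds of steps, which is what prompted the question.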
But that's inefficient and doesn't give an exact solution. Increasing the precision of the algorithm by using a smaller increment also increases run time.
Given f and D, is there a direct solution for delta? Failing that, is there a more efficient algorithm?
I'm sure this is obvious to somebody here, but not to me. Any help would be greatly appreciated. I realize this is a math problem; the connection with physics is that the Gaussian focus equation, given in every physics and optics textbook ever written, turns out to be somewhat difficult to apply to real cameras, which focus by moving the lens, not the back. I checked dozens of physics and photography books, and none that I found discuss this problem. Photography doesn't have refereed journals, and photography companies consider their control systems to be trade secrets. Thanks in advance!