Factors that influence depth of field

Thread starter: Oldhouse
Tags: DOF, Photography
AI Thread Summary
The discussion centers on the factors influencing depth of field (DOF) in photography and videography, particularly the claims made in a YouTube video regarding sensor size and image projection. The video asserts that sensor format does not affect DOF, which some participants challenge by emphasizing the importance of the circle of confusion and how image size impacts perceived focus when displayed. Key points include that the size of the entrance pupil, distance to the subject, and viewing conditions all play significant roles in determining DOF. Participants also highlight that while the video may provide a basic understanding, it lacks depth in explaining the physics behind DOF. Overall, the conversation seeks clarity on the accurate factors that influence DOF in various contexts.
Oldhouse
TL;DR Summary
What are the actual factors that influence depth of field in photography?
I came across a YouTube video that discusses some aspects of depth of field in photography/videography. Some things that were said collided with my understanding of DOF. I then had a long discussion with the creator of the video that unfortunately led nowhere. I'm no scientist, so I lack a proper understanding of the subject matter. If what is said in the video is correct, I would like to know, so I can learn something from it and correct my misconceptions.
The segment in question starts at 8mins 28s and ends at 9mins 10s (just over 30s in total)



He says "the format size, whether it is full frame, super35, any of that, does not change depth of field", he also shows a lens projecting an image on a wall and then says "it doesn't matter how big the wall is, it hasn't changed the depth of field" because "the image has already been formed, as it goes through the lens, before it hits the sensor" (wall in this case).

Are those statements actually accurate?

My understanding of DOF is that you have to take into consideration the properties of the picture as it arrives on our retina. In order for us to consider a part of the image "in focus", the circle of confusion has to be under a certain threshold. Many factors influence the circle of confusion. When recording the image, the main ones are the size of the entrance pupil of the lens and the distance to the subject, but also the size of the recording format, because it determines how much we have to enlarge the image when viewing it on our TV. When we view the picture on a TV, we also have to take into account the distance from the viewer to the TV, the size of the TV, and so on.

We can't really say anything about the DOF by looking at a picture projected onto a wall without taking the other factors into account. The DOF we get in a picture is different if we record it on a small piece of film or on a large piece of film, because we need to enlarge the small piece of film more than the large one to display it on our TV. Therefore the statement "it doesn't matter how big the wall is, it hasn't changed the depth of field" because "the image has already been formed, as it goes through the lens, before it hits the sensor" seems to be nonsensical.

Am I actually misunderstanding something here?
 
Oldhouse said:
My understanding of DOF is that you have to take into consideration the properties of the picture as it arrives on our retina.
Maybe for binoculars etc. But for video and photography it's about how it arrives on the film / sensor.
https://en.wikipedia.org/wiki/Depth_of_field

Oldhouse said:
When we view the picture on a TV, we have to take the distance from the viewer to the TV into account, the size of the TV etc.
When you print out a photo, is the depth of field affected by the size of the paper? Nobody uses the term in the way you seem to understand it.
 
Oldhouse said:
TL;DR Summary: What are the actual factors that influence depth of field in photography?

I came across a YouTube video that discusses some aspects of depth of field in photography/videography.
Have you looked at any other videos on the topic? There will be a lot of dross around on this topic, but I think the problem with the video is that he's told you something with no real explanation, although it seems largely OK to me. His comments about sensor format and the wall are correct because focus only depends on what's happening local to the part of the picture of interest, if that's the way he defines focus (and it's what autofocus works on).

Every step in the picture chain will introduce impairments. That doesn't negate the value of assigning quality parameters at the source system. Any serious broadcaster will take into account the later impairments of the broadcast chain. For instance, the quality of studio set design and props could be really poor for 405-line TV; it's easy to spot old material in amongst modern studio shots.

Oldhouse said:
TL;DR Summary: What are the actual factors that influence depth of field in photography?

My understanding of DOF is that you have to take into consideration the properties of the picture as it arrives on our retina.
You'd need to give a source for that. In practice, you could say that you need to take into consideration the viewing conditions and display type when deciding whether source picture quality is good enough, but that's a massive modification to what I would think of as being the Depth of Field of a camera.



A.T. said:
Maybe for binoculars etc.
Agreed; your eye can compensate for a lot of bad focus. The plane of the paper / sensor is fixed so the depth of field is easier to define. (you know when it is or isn't in focus)
 
Oldhouse said:
He says "the format size, whether it is full frame, super35, any of that, does not change depth of field", he also shows a lens projecting an image on a wall and then says "it doesn't matter how big the wall is, it hasn't changed the depth of field" because "the image has already been formed, as it goes through the lens, before it hits the sensor" (wall in this case).

Are those statements actually accurate?
That's correct. The size of the sensor/film is irrelevant.

Oldhouse said:
When recording the image, the main ones are the size of the entrance pupil of the lens and the distance to the subject, but also the size of the recording format, because it determines how much we have to enlarge the image when viewing it on our TV. When we view the picture on a TV, we also have to take into account the distance from the viewer to the TV, the size of the TV, and so on.
No, the enlargement of an image has nothing to do with depth of field, nor does the distance between the viewer and the tv.

See the following image:

[Attached image: aperture-depth-of-field.png, a ray diagram comparing a small (green) and a large (pink) aperture and the resulting spread of rays at the sensor]


Notice the two colored vertical lines overlapping the sensor. They represent the spread of light rays from an out-of-focus object when they hit the sensor. The pink line, being longer than the green line, indicates more blur. A camera using the smaller (green) aperture would have a larger depth of field than one using the larger aperture because out-of-focus objects produce a smaller blur. That is, there is some amount of 'acceptable blurriness' before it actually becomes noticeable to the eye, and the smaller aperture increases this range simply by reducing the blurriness of unfocused objects (everything not near the point/distance that the camera is focused on).

And that's pretty much all depth of field is. Generally, stopping down the aperture (going up in f-numbers) increases your depth of field by reducing the size of the aperture, and vice versa.
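To make that concrete, here is a minimal Python sketch of the geometry in the diagram (a simple thin-lens model; the focal length, f-stops, and distances are illustrative values I've picked, not numbers from the thread):

```python
# Minimal sketch of the defocus geometry (thin-lens model, units in mm).
# A point at distance u, with the lens focused at u_f, forms a blur circle
# on the sensor whose diameter follows from similar triangles.

def blur_circle_diameter(f_mm, n_stop, u_f_mm, u_mm):
    """Geometric blur-spot diameter on the sensor, in mm."""
    aperture = f_mm / n_stop                    # entrance pupil diameter
    v_focus = u_f_mm * f_mm / (u_f_mm - f_mm)   # sensor position for the focus plane
    v_point = u_mm * f_mm / (u_mm - f_mm)       # where the out-of-focus point converges
    # The cone of light converging at v_point has diameter `aperture` at the
    # lens; its cross-section at the sensor plane (v_focus) is the blur circle.
    return aperture * abs(v_focus - v_point) / v_point

# Stopping down from f/2 to f/8 shrinks the blur of the same out-of-focus point:
for n in (2, 8):
    b = blur_circle_diameter(f_mm=50, n_stop=n, u_f_mm=3000, u_mm=5000)
    print(f"f/{n}: blur circle ≈ {b * 1000:.0f} µm")
```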
 
Oldhouse said:
TL;DR Summary: What are the actual factors that influence depth of field in photography?
The video is fine, I only partially disagree with one thing presented- the role of image size. It's important to realize that part of lens design includes the image circle size, something not always appreciated. Often, that subtlety gets folded into the concept of 'crop factor' or "35mm equivalent focal length". For example, large-format lenses have much larger image circles than 35mm format lenses, which in turn have much larger image circles than smartphone camera lenses.

Depth of field does indeed depend on the concept of 'circle of confusion' and is inherently a subjective statement, although it can be more-or-less standardized and corresponds to the markings on a lens barrel. Here is where image format kinda-sorta comes in: the size of the circle of confusion does depend on the size of the image circle, due to, as you note, secondary magnification of the image.

Some 'standard' numbers: the CoC for 35-mm format image circles is 30 microns, while large-format lenses have a CoC closer to 200 microns. Don't know what it is for a smartphone.... but, if I could use a large format lens on a smartphone sensor, the image would look totally blurry. Trying to use a smartphone lens on 8x10 film would result in most of the film not being exposed. If I printed a "perfectly sharp" 35mm format image to fit on a billboard, from a great distance the image would look fine.

A good presentation of DoF formulas based on ray optics is here:

https://www.dofmaster.com/equations.html
https://www.dofmaster.com/dofjs.html
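If it helps, the core equations from the first link are easy to sketch in Python (the 0.030 mm circle of confusion below is the 35mm-format figure mentioned above; all other numbers are illustrative):

```python
# Sketch of the standard ray-optics DoF equations (units in mm).

def dof_limits(f, N, u, c):
    """Return (near, far) limits of acceptable focus for focus distance u."""
    H = f * f / (N * c) + f                    # hyperfocal distance
    near = u * (H - f) / (H + u - 2 * f)
    far = u * (H - f) / (H - u) if u < H else float("inf")
    return near, far

near, far = dof_limits(f=50, N=8, u=3000, c=0.030)
print(f"near ≈ {near:.0f} mm, far ≈ {far:.0f} mm, DoF ≈ {far - near:.0f} mm")
```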

Hope this helps....
 
Drakkith said:
That's correct. The size of the sensor/film is irrelevant.


No, the enlargement of an image has nothing to do with depth of field, nor does the distance between the viewer and the tv.
I think that is exactly where most people go wrong. Please take a look at the two attached screenshots of a DOF calculator. The only parameter that changed is the sensor size. This results in a change of DOF, as you can see. So why is that? Simple... The picture recorded with a micro 4/3 sensor needs to be enlarged twice as much as the picture recorded with the full frame sensor. Therefore, the permissible circle of confusion for the 4/3 sensor is half the size of the one for the full frame sensor.
Any comments?
[Attached screenshots: DOF calculator results for a full frame and a micro 4/3 sensor, all other parameters identical]
 
A.T. said:
When you print out a photo, is the depth of field affected by the size of the paper? Nobody uses the term in the way you seem to understand it.
Yes, it is. This becomes important when you take pictures for billboards etc. Resolution, display/paper size, distance between the viewer and the screen etc. all play a role.

A very simple explanation of this is as follows: Try to manually focus a picture on a small camera screen; it will be very difficult, if not impossible. It will look in focus on your camera screen, but the second you look at it on your large TV screen, you will notice that it is out of focus.

Another way to illustrate it is the following: Take a slightly out of focus picture, then scale it down to 1/10 of its original size. You will notice that it now looks perfectly in focus. Display size and resolution matter. BTW, here is something that might sound crazy at first, but it is actually a real thing: If you take a picture, crop it, then scale it back to its original size... the DOF changes.

Here are links with some more info on the topic:
https://damienfournier.co/dof-and-circle-of-confusion/

https://photo.stackexchange.com/que...-of-confusion-of-the-human-eye-to-directly-ob
 
sophiecentaur said:
You'd need to give a source for that. In practice, you could say that you need to take into consideration the viewing conditions and display type when deciding whether source picture quality is good enough, but that's a massive modification to what I would think of as being the Depth of Field of a camera.
Not at all... This is actually not even contested. Of course viewing size, distance to screen etc. influences DOF. Here is some info on the topic... jump to the last part for the conclusion: https://damienfournier.co/dof-and-circle-of-confusion/
 
Oldhouse said:
Of course viewing size, distance to screen etc. influences DOF
No; they affect the perceived depth of focus. That video seems to deny that but, in the same way, he puts very few actual numbers to his statements about lens behaviour. As advice to beginners, the video is fine and could help people to select how best to spend their fortune, but I don't think we can learn any physics from what he says.

I'm afraid you need to define depth of field formally before you can make statements about it. Take two objects (pairs of points) at different distances. How would you decide that one is 'in focus' and one isn't? It's no good just eyeballing the images on our ideal wall / screen. There is no camera system that doesn't do something to tinker with apparent sharpness, so how does your wall compare with your top of the range intelligent Nikon camera? The camera will do better for the viewer.

There is one absolute (but arbitrary) way, and that could involve the Rayleigh Criterion. When one pair of points is resolved by the Rayleigh Criterion, that's the best you can do. (Not really, but just say it is.) Take the other pair of points and move them away (keeping the same angular separation) from the first object distance until they are not resolved. How imperfect would that have to be? Perfect focus is where the dip in the resulting image is at half value, so where would you choose the shallowest dip for acceptable focus?

So you can't just say it's all down to the lens without using some actual criterion. We can all muck about with our Raw images in Photoshop and make them look sharper but then, we have to deal with a dark area of the image and we see noise suddenly creep in - just when we thought we'd cracked it.
 
  • #10
Drakkith said:
And that's pretty much all depth of field is.
OK as a comparative measure - describing the visible effect rather than making it formal. There's a lot of talk about the importance of pixel count, but when the final image is printed / displayed, it's at least as much to do with the image processing on the way through to the 2K, 4K, super duper resolution of the display. AI can do a brilliant job to convince you that the £nk+ you spent was really worthwhile, and the basic depth of field is swallowed up in the rest of the channel.
 
  • #11
sophiecentaur said:
Take two objects (pairs of points) at different distances. How would you decide that one is 'in focus' and one isn't?
That's actually fairly easy to do, if your objects are individual pinholes. If you can see any spatial extent to either of the imaged pinholes, you would say that pinhole is not in focus. That's actually how the 30 micron CoC metric was determined, I believe by Polaroid in the 1960s.
 
  • #12
Andy Resnick said:
That's actually fairly easy to do, if your objects are individual pinholes. If you can see any spatial extent to either of the imaged pinholes, you would say that pinhole is not in focus. That's actually how the 30 micron CoC metric was determined, I believe by Polaroid in the 1960s.
You can 'buy a model star' for checking the collimation of a telescope. My point was that there doesn't seem to be a simple way to measure the DOF of a lens, but there seems little point, bearing in mind all the other links in the video chain. I always used to be happy with the little marks against the focus ring of my old 35mm cameras, which showed you what to expect of the focus scale. Shaking was always a more important factor for me lol.

I think this thread has put things in some sort of realistic order.
 
  • #13
sophiecentaur said:
No; they affect the perceived depth of focus. That video seems to deny that but, in the same way, he puts very few actual numbers to his statements about lens behaviour. As advice to beginners, the video is fine and could help people to select how best to spend their fortune, but I don't think we can learn any physics from what he says.

I'm afraid you need to define depth of field formally before you can make statements about it.
In that case, what is the difference between DOF and "perceived depth of focus"?
To my understanding, there is no difference. The depth of field is actually the perceived depth of focus. This is why viewing size, distance to screen etc. influence DOF, as explained in greater detail here: https://damienfournier.co/dof-and-circle-of-confusion/ . I don't even think that there is any point of contention that the mentioned factors influence DOF. Reference to it can also be found in the Wikipedia article here: https://en.wikipedia.org/wiki/Circle_of_confusion

As Andy Resnick pointed out, there is actually a way to determine what is in focus or not. To my understanding, it also involves taking the resolution of the eye into account, which then informs how large the CoC can be so we perceive it as a "single point". The closer you are to the image/TV or the larger the image/TV is, the smaller the CoC needs to be for us to still perceive a "single point" as a single point. Of course this is not an absolute because not every eye is the same etc. but there are specific numbers we settled on for the permissible CoC under specific viewing conditions. The recommended permissible CoC therefore differs when shooting a movie for TV vs for the Cinema.
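As a hedged sketch of that idea (using the common print standard quoted elsewhere in this thread: a blur circle of 0.2 mm on a print viewed at about 25 cm still reads as a point; the sensor widths are nominal):

```python
# Derive a permissible on-sensor CoC from a display-side sharpness criterion.

def permissible_coc_on_sensor(blur_on_print_mm, print_width_mm, sensor_width_mm):
    enlargement = print_width_mm / sensor_width_mm
    return blur_on_print_mm / enlargement   # shrink the print criterion to sensor scale

# Same print size and viewing distance: a smaller sensor demands a smaller CoC.
for name, width_mm in [("full frame", 36.0), ("micro 4/3", 17.3)]:
    c = permissible_coc_on_sensor(0.2, 250.0, width_mm)
    print(f"{name}: permissible CoC ≈ {c * 1000:.0f} µm")
# Full frame comes out near the ~30 micron standard mentioned earlier;
# the micro 4/3 value is roughly half of that.
```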
 
  • #14
Andy Resnick said:
That's actually fairly easy to do, if your objects are individual pinholes. If you can see any spatial extent to either of the imaged pinholes, you would say that pinhole is not in focus. That's actually how the 30 micron CoC metric was determined, I believe by Polaroid in the 1960s.
Yes, and I believe the CoC was calculated for the following viewing conditions: an 8"x10" photograph viewed at 14 inches with perfect vision.
 
  • #15
Oldhouse said:
I think that is exactly where most people go wrong. Please take a look at the two attached screenshots of a DOF calculator. The only parameter that changed is the sensor size. This results in a change of DOF, as you can see. So why is that? Simple... The picture recorded with a micro 4/3 sensor needs to be enlarged twice as much as the picture recorded with the full frame sensor. Therefore, the permissible circle of confusion for the 4/3 sensor is half the size of the one for the full frame sensor.
Any comments?
After looking into this more, the issue boils down to how DOF is defined:

##DOF \approx \frac{2u^2Nc}{f^2}##

Here ##u## is distance to subject, ##N## is the f-number of the system, ##c## is the acceptable circle of confusion, and ##f## is the focal length of the system. The value ##c##, the maximum acceptable circle of confusion, is what is important to understand here. It turns out that the circle of confusion DOES change as sensor size changes. Per wikipedia:

Image sensor size affects DOF in counterintuitive ways. Because the circle of confusion is directly tied to the sensor size, decreasing the size of the sensor while holding focal length and aperture constant will decrease the depth of field (by the crop factor). The resulting image however will have a different field of view. If the focal length is altered to maintain the field of view, the change in focal length will counter the decrease of DOF from the smaller sensor and increase the depth of field (also by the crop factor).

I could be mistaken, but my understanding is this: Take an image. To define the maximum acceptable circle of confusion, we need to say how big the image is. If the image is very small, such as when displayed on a small screen or a small piece of paper, then the blurring will have been 'shrunk down' very small when compared to displaying the image on a much larger screen or piece of paper. Imagine an image projected onto the side of a building. If you stand very close, then no part of the image will be acceptably sharp, and you'll need to move way back to make it look good.

So there is an assumption in that calculator that the two images are being displayed at the same size and being viewed at the same distance. This would mean that the cropped version is having its image magnified by 2x, which increases the diameter of the blur by 2x, which reduces the depth of field since parts of the image that used to be acceptably sharp are now not.

So yes, sensor size is part of what determines the DOF of a system, but in a roundabout, counterintuitive way.
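A quick numeric check of that, using the approximation quoted above (the 0.030 mm full-frame CoC is an assumed standard value, halved for the 2x-crop sensor; the lens and distance are arbitrary):

```python
# DOF ≈ 2*u^2*N*c / f^2  (valid when u is much larger than f but well
# short of the hyperfocal distance); units in mm.

def dof_approx(u, N, c, f):
    return 2 * u**2 * N * c / f**2

# Same lens, f-stop and subject distance; only the sensor (and with it the
# permissible CoC) changes.
for name, c in [("full frame", 0.030), ("micro 4/3 (2x crop)", 0.015)]:
    print(f"{name}: DOF ≈ {dof_approx(u=3000, N=4, c=c, f=50) / 1000:.2f} m")
# Halving the CoC halves the DOF, matching the calculator screenshots above.
```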
 
  • #16
Drakkith said:
After looking into this more, the issue boils down to how DOF is defined:

.....
Ok, thank you. I already thought I'm going crazy here :smile:
 
  • #17
Oldhouse said:
Ok, thank you. I already thought I'm going crazy here :smile:
Indeed. I wouldn't have thought that the final image size had anything to do with how DOF is calculated, but that's because I was only thinking about the physics of the optical system, not how the image is actually displayed.
 
  • #18
Drakkith said:
So yes, sensor size is part of what determines the DOF of a system, but in a roundabout, counterintuitive way.
For a little more clarity:

Assume two cameras take an image of the same scene with the same optical system at the same distance, the only difference being that the second camera has a sensor that is half the size of the first. There is an assumption that the final images will both be displayed at the same size, even though the second camera took an image that is only half the size of the first.

That assumption is the crux of the issue. If both images were displayed on the same monitor but at their native resolutions then the DOF of both would be the same. But since there's an assumption that the second image will be magnified to be displayed at the same size as the first then we must admit that this will reduce the DOF since it involves magnifying the image and all the blurring inherent in that image.
 
  • #19
I have put off contributing so far, but @Drakkith your post 15 summarizes what I recall from making pictures (and reading photography books) 25 or 30 years ago. Format (today called sensor size) and print size (screen size) are both essential to understanding DOF.
 
  • #20
Drakkith said:
Indeed. I wouldn't have thought that the final image size had anything to do with how DOF is calculated, but that's because I was only thinking about the physics of the optical system, not how the image is actually displayed.
Yes, I also apparently misunderstood DoF as based only on the parameters that you can actually control when shooting. Especially in the digital age you have very little control over how the footage will finally be viewed: it could be a flat TV way too big for the room, or a tiny mobile phone.
 
  • #21
To go back to my original question... What is said in the video between 8mins 28s and 9mins 10s is simply wrong, right? Or am I somehow misunderstanding the video?
 
  • #22
Drakkith said:
After looking into this more, the issue boils down to how DOF is defined:
Exactly. I already made the point that a less 'fussy' channel and display will be less fussy about how accurately you will need to focus your expensive objective lens. If it is claimed that the DOF refers to the overall DOF, then everyone who is looking (on their own system) will get a different answer. Is that a useful metric?

DOF is a nicely behaved quantity, for a half decent lens, when you define it in terms of an ideal screen / infinite sensor element density. You can predict it pretty well, just on the geometry. I don't remember old film cameras having different DOF markings for different grades of film. What has changed?
 
  • #23
Oldhouse said:
To go back to my original question... What is said in the video between 8mins 28s and 9mins 10s is simply wrong, right? Or am I somehow misunderstanding the video?
Unless there is actual disagreement about the physics, it's just a pointless argument about definitions.
 
  • #24
A.T. said:
Unless there is actual disagreement about the physics, it's just a pointless argument about definitions.
I don't see it as pointless at all. If you make up your own definitions (deviating from the established ones), anything you say could be construed as true... whether it is or it isn't. You couldn't agree on anything... I could simply say the sky is green and would be right (if I make up my own definition that green light has a wavelength of 460 nm). The definition of DOF is clearly established. Therefore, unless I completely misunderstand what is said in the video, the explanations of what affects DOF are completely wrong.
 
  • #25
Oldhouse said:
I could simply say the sky is green and would be right (if I make up my own definition that green light has a wavelength of 460 nm).
As long as you agree on the measurable wavelength spectrum the sky has, there is no disagreement about physics, just about the word "green".
 
  • #26
sophiecentaur said:
If it is claimed that the DOF refers to the overall DOF, then everyone who is looking (on their own system) will get a different answer. Is that a useful metric?
Not sure what you mean by that. There is nothing other than the "DOF for everyone who is looking at it on their own systems". There is no such thing as an "overall DOF" or any sort of other DOF without taking into account the viewing conditions. In order to make any statement about DOF, you need to define a permissible CoC. In order to define a permissible CoC, you need to decide on a viewing distance, magnification etc.
We established values for the permissible CoC for different recording/viewing conditions. If you choose the right value to match your circumstances, you get a useful metric.
 
  • #27
A.T. said:
As long as you agree on the measurable wavelength spectrum the sky has, there is no disagreement about physics, just about the word "green".
We are going in circles here.... My original question is precisely that: Are the statements true if we use the established definition of DOF? If you want to say that the statements are true but only if you use a "made up" definition of the word DOF that doesn't adhere to the established definition, that is fine... although pointless in my opinion.
 
  • #28
Oldhouse said:
Not sure what you mean by that. There is nothing other than the "DOF for everyone who is looking at it on their own systems". There is no such thing as an "overall DOF" or any sort of other DOF without taking into account the viewing conditions. In order to make any statement about DOF, you need to define a permissible CoC. In order to define a permissible CoC, you need to decide on a viewing distance, magnification etc.
We established values for the permissible CoC for different recording/viewing conditions. If you choose the right value to match your circumstances, you get a useful metric.
Have you really thought this through? Once you have established the geometry of the initial image formation, there's nothing you can do to change the focus (ignoring AI etc.). A 'photographer' will produce a picture that has an impact, often dependent on differential focus (a sharp object in the background with a blurry foreground, for instance). If the original image has everything in sharp focus then, without Photoshopping and blur control, the original effect of creative focus can't be achieved. He will want some control of this differential focus. Adding degradations will completely lose the wanted focal effects. If a viewer sees a degraded picture, then how is their perception of apparent depth of any value at all?
A.T. said:
Unless there is actual disagreement about the physics, it's just a pointless argument about definitions.
There is the third aspect of the engineering considerations and practicality. It's not pointless to decide which definition to use, based on practicality. Next time you watch TV, try to spot where differential focus is used to great effect. Watching feature films on a smartphone loses a lot of the impact. IMO, discussing that is not pointless.

At least basing the definition of DOF on front end geometry and optics provides a repeatable measure. That could be the safest choice.
 
  • #29
sophiecentaur said:
Have you really thought this through?
Yes, I have thought this through. I actually have a degree in film production with an emphasis in cinematography and also worked professionally as a cinematographer, so I'm quite familiar with how DOF is used creatively.
What DOF is, is clearly defined/established. There are no alternative definitions of it as you suggest. Also, I think you still haven't properly understood what DOF is, because it is not possible to base the definition of DOF on the frontend geometry and optics as you suggest. Drakkith even pointed that out in a previous post when he realized that himself.
 
  • #30
Oldhouse said:
Yes, I have thought this through. I actually have a degree in film production with an emphasis in cinematography and also worked professionally as a cinematographer, so I'm quite familiar with how DOF is used creatively.
What DOF is, is clearly defined/established. There are no alternative definitions of it as you suggest. Also, I think you still haven't properly understood what DOF is, because it is not possible to base the definition of DOF on the frontend geometry and optics as you suggest. Drakkith even pointed that out in a previous post when he realized that himself.
OK. It's good to hear from someone with actual experience. Perhaps you could give me an example of how that definition (or any definition) is actually implemented (objectively) in practice, say when setting up shots of a scene with two different focus settings, moving from one to another part of the scene with the camera static. What numbers are used? How is 'focus' checked, and also the amount of de-focus in another part of the scene? Does the cinematographer actually measure it, or does he/she look at the ranges involved? There will be studio monitors which the director can look at and make a decision at the time, using the shots. But how are the choices of apertures made, other than from past experience? What 'calculations' are done?
At a more general level, I do know that TV vision suites use a monitor which shows the resulting output from the studio. This has to involve the sum of artefacts from scene to broadcast signal. I know that, in the days when (analogue) colour studios were required to produce a compatible monochrome output, there was also a mono monitor. In sports programmes, the resolution in the compatible mono picture was what determined the field of the colour cameras. When they finally stopped using 405 lines, there were more players in shot. And so on to 4K.

"What DOF is, is clearly defined/established". Could you be more detailed here and tell me an example with definition and numbers - say when assessing when two objects are both sufficiently well focussed AND when the differential focus between two objects is sufficient. As a creative, I would imagine you would have been through that process. Are you old enough to remember using tape measures for film? There must have been many rules of thumb.

Having looked into astrophotography, I am aware that the Airy Disc is used when deciding on resolution between two point objects - but the situation is limited to the objective lens and the sensor array, and we don't go for de-focus very often. Everything is at infinity, too.
 
  • #31
sophiecentaur said:
OK. It's good to hear from someone with actual experience. Perhaps you could give me an example of how that definition (or any definition) is actually implemented (objectively) in practice, say when setting up shots of a scene with two different focus settings, moving from one to another part of the scene with the camera static. What numbers are used? How is 'focus' checked, and also the amount of de-focus in another part of the scene? Does the cinematographer actually measure it, or does he/she look at the ranges involved? There will be studio monitors which the director can look at and make a decision at the time, using the shots. But how are the choices of apertures made, other than from past experience? What 'calculations' are done?
At a more general level, I do know that TV vision suites use a monitor which shows the resulting output from the studio. This has to involve the sum of artefacts from scene to broadcast signal. I know that, in the days when (analogue) colour studios were required to produce a compatible monochrome output, there was also a mono monitor. In sports programmes, the resolution in the compatible mono picture was what determined the field of the colour cameras. When they finally stopped using 405 lines, there were more players in shot. And so on to 4K.

"What DOF is, is clearly defined/established". Could you be more detailed here and tell me an example with definition and numbers - say when assessing when two objects are both sufficiently well focussed AND when the differential focus between two objects is sufficient. As a creative, I would imagine you would have been through that process. Are you old enough to remember using tape measures for film? There must have been many rules of thumb.

Having looked into astrophotography, I am aware that the Airy Disc is used when deciding on resolution between two point objects - but the situation is limited to the objective lens and the sensor array, and we don't go for de-focus very often. Everything is at infinity, too.
Generally speaking, the following applies to narrative filmmaking: You usually light a scene for a specific f-stop (whole scene will be shot at that same f-stop). The director of photography might decide that he wants to shoot the scene at f/4 and instructs his crew about the light levels for individual areas of the scene (example, key light on actor at f/4, fill light 2 stops under, BG 3 stops under etc.). The f-stop chosen is usually a creative choice based on experience from previous shoots. The director is often also involved in this choice because it can have a major influence on the overall look of the film. Calculations on DOF aren't really done at this stage. This is just all from experience and looking at reference images with the director.
While filming, focus is adjusted by the 1st AC. During rehearsals, he will use a tape measure or laser range finder to measure the distance from the film plane to the actor/important objects. Usually, he will put marks on the floor where actors are standing etc. He will also mark the follow focus or lens scale directly so he can quickly change focus from one mark to another. As the scene plays out, he will manually focus the lens, using the distance marks on the floor and on the follow focus/lens to keep the object/actor of interest in focus. This takes a lot of practice to do well. It becomes increasingly difficult the more movement you have in the scene (actor might move, as well as the camera). It is actually pretty rare that you calculate your DOF.
A typical instance would be if you are doing a car scene with somebody in the front seat and somebody in the back seat, and the director wants both actors in focus at the same time. In the early days, you would consult DOF tables to figure out the DOF you have available. These days you use an app like PCam. For the car scene example, you might decide to put the focus point somewhere in between the two actors (can be risky, but it might be the only option depending on the available DOF). BTW, those DOF tables are written with a specific CoC in mind, and with an app like PCam you can even adjust the CoC used for the calculation. For each format, there is a recommended size for the CoC, calculated so it gives you a somewhat accurate idea of how much DOF you will have if the film is screened on an "average sized cinema screen". For example, the established permissible CoC for shooting on s35 is 21.08µ. Some are currently debating whether this has to be changed because we have sharper lenses available, higher resolution, better projection standards etc. If you are shooting for IMAX or some other "out of the ordinary screen", you might have to adjust the permissible CoC for more accurate results.

Another instance where you might check how much DOF you have available is in low light conditions, especially if there is a lot of movement, which makes the job hard for the 1st AC. If you know that you only have a few inches of DOF, you might put some extra focus marks down, or in some extreme cases (when shooting extreme close-ups) the actor might have to be informed that he has to keep his movements to a minimum (this happens quite often on low budget shoots that don't have large lighting packages and run out of daylight).

BTW, the 1st AC usually doesn't check the focus on a screen (the screen on the camera is often way too small to do that, and in some cases there isn't even a screen available). When shooting film (actual film), the only person who can actually judge if the "focus was on" is the operator looking through the camera. The video tap (a beamsplitter with a low resolution video camera in the optical path of the viewfinder) only gives you a reference picture for framing. With digital cameras, you could pull focus off a production monitor, but it usually isn't done that way. In order to predict the actors' movement better, it is easier to just stand next to the camera and pull focus based on the distances you marked during rehearsals (if you pull focus off a monitor, chances are that you are always one step behind... if you see the image going out of focus, it is already too late). Hope that all makes sense.

As mentioned, the cases where you actually calculate your available DOF are relatively rare. Most movies are shot between f/2.8 and f/5.6 (s35 format). In most regular shooting conditions, you have plenty of DOF to keep your object of interest in focus. When it comes to some more specialty fields like filming miniatures or filming stuff for visual effects, DOF calculations become very important (can't really talk much about those fields as my knowledge of them is mainly theoretical).
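As a rough illustration of the kind of check an app like PCam performs (only the 21.08µ CoC comes from above; the lens, stop, and seat distances are made up for the sketch):

```python
# DoF limits for the car-scene example; units in mm.

def dof_limits(f, N, u, c):
    H = f * f / (N * c) + f                    # hyperfocal distance
    near = u * (H - f) / (H + u - 2 * f)
    far = u * (H - f) / (H - u) if u < H else float("inf")
    return near, far

# 35 mm lens at f/2.8 on s35, focus split between actors at 1.2 m and 2.0 m:
near, far = dof_limits(f=35, N=2.8, u=1550, c=0.02108)
print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
# Roughly 1.44-1.67 m here: about 23 cm of depth, so both actors cannot be
# sharp at once and you would need to stop down or compromise.
```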

Sorry for the super long post...
 
  • #32
Oldhouse said:
We are going in circles here.... My original question is precisely that: Are the statements true if we use the established definition of DOF?
Based on my limited knowledge and what we've posted here in-thread, no, the statements saying that DOF only depends on the properties of the optical system and not the size of the sensor are not correct in general, as they ignore how the image will end up being displayed.
 
  • #33
Drakkith said:
Based on my limited knowledge and what we've posted here in-thread, no, the statements saying that DOF only depends on the properties of the optical system and not the size of the sensor are not correct in general, as they ignore how the image will end up being displayed.
Thanks. I know it might sound as if I just wanted somebody to confirm that I'm right for the sake of being right. This is not the case; after having a long discussion with the guy in the video, we basically agreed on all the technical aspects, but he was still insisting that everything that is said in the video is correct. This genuinely confused/irritated me, as I couldn't figure out how he could come to that conclusion after we agreed on all the technicalities. I then started to doubt myself... maybe I'm just not properly understanding/interpreting what is actually being said in the video (English is not my native tongue). It just bothered me... BTW, I also posted the same question on https://photo.stackexchange.com/ and it looks like somebody was able to pinpoint the issue at hand. His explanation makes a lot of sense and adds some additional context.

Here is the answer from Steven Kersting:

Your understanding is correct; the author is confusing depth of focus with depth of field... it's a fairly common confusion as the two are interrelated, but only one is common terminology.

The depth of focus is the relative sharpness of details at the image plane, and depth of focus is a fixed characteristic of an image. It is the non-variable component of depth of field; and it is dictated only by the physical size of the aperture opening/entrance pupil (not the f/#).

Depth of field is not a fixed aspect of an image; so when people talk about depth of field as a fixed aspect, they are really talking about the depth of focus.

Depth of field is dictated only by magnification... it is how apparent the depth of focus is made to the viewer. Magnification includes all of the other variables... focal length, subject distance, sensor area/cropping/enlargement, viewing distance, and even the viewer's visual acuity.

If the final/total magnification causes the image to have the same composition and relative size to the viewer, then the depth of field will remain the same and be dependent only on the depth of focus recorded. That's why the standard for image sharpness (and the calculators), assumes a standard viewing distance approximately equal to the image diagonal; and to where the image occupies the human's ~ 45˚ primary field of view.

source: https://photo.stackexchange.com/que...-dof-and-the-factors-that-influence-it#134817
 
  • #34
Oldhouse said:
Are the statements true if we use the established definition of DOF?
The definition of DoF relies on the definition of the CoC which is rather fuzzy and application dependent:
https://en.wikipedia.org/wiki/Circle_of_confusion
wikipedia said:
The CoC limit can be specified on a final image (e.g. a print) or on the original image (on film or image sensor).
If you define the CoC based on the original image (on film/sensor), then it is independent of the display parameters (print/screen size and viewing distance). So I would suggest to clarify how the author of the video defines the CoC.
 
  • #35
A.T. said:
If you define the CoC based on the original image (on film/sensor), then it is independent of the display parameters (print/screen size and viewing distance).
Ah, I missed this when I read that wikipedia article. Defining the CoC in reference to a spot size on the sensor itself is more in line with what we deal with in astrophotography.
 
  • #36
Oldhouse said:
Your understanding is correct; the author is confusing depth of focus with depth of field... it's a fairly common confusion as the two are interrelated, but only one is common terminology.

The depth of focus is the relative sharpness of details at the image plane, and depth of focus is a fixed characteristic of an image. It is the non-variable component of depth of field; and it is dictated only by the physical size of the aperture opening/entrance pupil (not the f/#).

Depth of field is not a fixed aspect of an image; so when people talk about depth of field as a fixed aspect, they are really talking about the depth of focus.

Depth of field is dictated only by magnification... it is how apparent the depth of focus is made to the viewer. Magnification includes all of the other variables... focal length, subject distance, sensor area/cropping/enlargement, viewing distance, and even the viewer's visual acuity.
Be careful... that's only partially correct.

"Depth of focus" refers to the allowed mechanical tolerance of placing the sensor at the plane of best focus. Therefore, it is independent of the sensor.

"Depth of field" refers to the range of object distances imaged 'in focus' at the image plane, and depends on all the stuff we have been discussing here.

Unfortunately, many people use the terms interchangeably.
 
  • #37
A.T. said:
The definition of DoF relies on the definition of the CoC which is rather fuzzy and application dependent:
https://en.wikipedia.org/wiki/Circle_of_confusion

If you define the CoC based on the original image (on film/sensor), then it is independent of the display parameters (print/screen size and viewing distance). So I would suggest to clarify how the author of the video defines the CoC.
Yes, of course it doesn't matter if you define the CoC at the sensor or on the final printed image. However, when defining the CoC, you have to take the final viewing conditions into account (no matter if you define the permissible CoC on the sensor or in the final print), else you end up with a completely useless measure. The CoC (no matter if defined on the sensor or the printed image) is usually chosen so that when the picture is viewed, a "singular point" is still perceived as a "singular point" and not as a blurry point.
 
  • #38
Oldhouse said:
However, when defining the CoC, you have to take the final viewing conditions into account (...), else you end up with a completely useless measure.
I can see some uses for it:

If the final viewing conditions are unknown, one still might want to compare different optics in terms of DoF, for example based on a CoC defined as a fraction of the original image size.

If the footage is used for automated digital image processing that reads the sensor directly, one still might need to know the DoF, for example based on a CoC in terms of sensor pixels.

In any case, the definition of the CoC is just too fuzzy to argue about right and wrong.
 
  • #39
A.T. said:
I can see some uses for it:

If the final viewing conditions are unknown, one still might want to compare different optics in terms of DoF, for example based on a CoC defined as a fraction of the original image size.

If the footage is used for automated digital image processing that reads the sensor directly, one still might need to know the DoF, for example based on a CoC in terms of sensor pixels.

In any case, the definition of the CoC is just too fuzzy to argue about right and wrong.
I probably wouldn't call this comparing optics in terms of DoF, but just comparing optics in terms of the CoC as a fraction of the sensor size (which is how the "Zeiss Formula" defines the CoC). This now becomes all semantics... I agree, there's not really a point in arguing over it.
 
  • #40
Oldhouse said:
I probably wouldn't call this comparing optics in terms of DoF, but just comparing optics in terms of the CoC as a fraction of the sensor size (which is how the "Zeiss Formula" defines the CoC).
The Zeiss Formula gives you the acceptable CoC based on sensor size, which you then can plug into the DoF formula to compute and compare the DoF for different optics. None of that requires knowledge of the actual viewing conditions.
 
  • #41
A.T. said:
The Zeiss Formula gives you the acceptable CoC based on sensor size, which you then can plug into the DoF formula to compute and compare the DoF for different optics. None of that requires knowledge of the actual viewing conditions.
No, you forget that the Zeiss Formula was derived from the DOF markings of a Zeiss Triotar lens, and the DOF markings were put on the lens with a CoC calculated for a viewing distance equal to the picture diagonal. I'm done talking about this... We are going in circles.
 
  • #42
Oldhouse said:
No, you forget that the Zeiss Formula was derived from the DOF markings of a Zeiss Triotar lens, and the DOF markings were put on the lens with a CoC calculated for a viewing distance equal to the picture diagonal.
Of course the formula was defined based on some practical consideration and not just a random ratio. But when it is applied, you don't need any data on the viewing conditions, just the sensor size. Also, for relative comparison of two optical systems it doesn't matter if you use the Zeiss-CoC or say twice the value, because it's about the ratio of the resulting DoF.

And for automated digital image processing there is no human viewing intended at all. Some algorithm reads the pixel values directly, but is limited by the CoC in terms of pixels, which then gives you the limits in terms of DoF.
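A small sketch of that comparison (the Zeiss formula is commonly quoted as CoC = sensor diagonal / 1730; the lens and subject distance are illustrative):

```python
import math

# Compare DoF across sensors using only sensor geometry; units in mm.

def zeiss_coc(width, height, divisor=1730):
    return math.hypot(width, height) / divisor

def dof_approx(u, N, c, f):
    return 2 * u**2 * N * c / f**2

for name, w, h in [("full frame", 36.0, 24.0), ("micro 4/3", 17.3, 13.0)]:
    c = zeiss_coc(w, h)
    print(f"{name}: CoC ≈ {c * 1000:.0f} µm, "
          f"DOF ≈ {dof_approx(u=3000, N=4, c=c, f=50) / 1000:.2f} m")
# Note that the ratio of the two DoF values is fixed by the diagonals alone,
# whatever divisor you pick, which is the point about relative comparison.
```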
 
  • #43
Oldhouse said:
TL;DR Summary: What are the actual factors that influence depth of field in photography?

The DOF we get in a picture is different if we record it on a small piece of film or on a large piece of film, because we need to enlarge the small piece of film more than the large one to display it on our TV. Therefore the statement "it doesn't matter how big the wall is, it hasn't changed the depth of field" because "the image has already been formed, as it goes through the lens, before it hits the sensor" seems to be nonsensical.

then you misunderstand depth of field

The depth of focus is the relative sharpness of details at the image plane, and depth of focus is a fixed characteristic of an image. It is the non-variable component of depth of field; and it is dictated only by the physical size of the aperture opening/entrance pupil (not the f/#).

Depth of field is not a fixed aspect of an image; so when people talk about depth of field as a fixed aspect, they are really talking about the depth of focus.

I agree with Andy, having done photography for more years than I care to remember.

Depth of field and depth of focus are essentially the same thing, and the guy's comments in the video are correct when stating that the sensor size is irrelevant.

Dave
 
  • #44
davenn said:
then you misunderstand depth of field



I agree with Andy, having done photography for more years than I care to remember.

Depth of field and depth of focus are essentially the same thing, and the guy's comments in the video are correct when stating that the sensor size is irrelevant.

Dave
I would say you misunderstand depth of field. In post #15, another member even posted the formula, which makes it obvious that sensor size has an influence on DOF. Furthermore, you can simply use any depth of field calculator (for example: https://dofsimulator.net/en/) and test it yourself. Leave all parameters untouched except for the sensor size and you will see for yourself that DOF changes.
 
  • #45
At the risk of beating a dead horse, I thought of a different perspective that might clarify how the size of the image (a non-subjective measure) can impact the depth of field (a subjective measure*).

* it is possible in some circumstances to quantify a maximum permissible defocus error; in fact this calculation is required for machine vision applications.

Specifically, consider display technology. In this analogy, the minimum size of a printed 'dot' or a display pixel approximately corresponds to the size of a circle of confusion; the complexities of optical systems and image recording are not relevant. Additionally, a quantitative metric of interest (dots per inch or pixel pitch) is always normalized to the subjective nature of human vision and the "optimal" viewing distance.

I looked up the specifications for the following and as best I could, converted everything to dots per inch (dpi):

Las Vegas Sphere (outside): 0.8 dpi
The Humungotron (located in my hometown): 100 dpi
Typical roadside billboard displays: 20 dpi
desktop/laptop monitor(**): 100-200 dpi
phone screen: 300-600 dpi
high-end printing services: 600-1200 dpi

(**) :) if you are looking at this on a specialized 8k or 12k display, you should already understand CoC so why are you wasting your time reading this? :)

The important observation is that for each of these, the image is considered "sufficiently sharp". For example, the ratio of laptop and phone dpi is nearly equal to the ratio of viewing distances for the two, showing how viewing distance (image size) impacts depth of field.

I think this analogy helps understand how depth of field (which depends on the size of the circle of confusion) scales with image size, which is a concept that can seem nonsensical or at least counter-intuitive.
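For what it's worth, the normalization described above can be sketched in a few lines (the dpi and viewing-distance pairings are my rough guesses, not measured values):

```python
import math

# A display reads as "sharp" when one dot subtends less than roughly one
# arcminute, the usual rule of thumb for the eye's resolution.

def dot_arcminutes(dpi, view_dist_in):
    dot_pitch_in = 1.0 / dpi
    return math.degrees(math.atan(dot_pitch_in / view_dist_in)) * 60

for name, dpi, dist_in in [("billboard", 20, 600),
                           ("laptop", 150, 24),
                           ("phone", 450, 12)]:
    print(f"{name}: ≈ {dot_arcminutes(dpi, dist_in):.2f} arcmin per dot")
# All come out near or below one arcminute, which is why each display is
# judged "sufficiently sharp" at its intended viewing distance.
```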
 
  • #46
Drakkith said:
That assumption is the crux of the issue. If both images were displayed on the same monitor but at their native resolutions then the DOF of both would be the same. But since there's an assumption that the second image will be magnified to be displayed at the same size as the first then we must admit that this will reduce the DOF since it involves magnifying the image and all the blurring inherent in that image.
To this thread I chime in with my own skepticism/misapprehensions.

In my ancient world, DoF is about the maximum physical distance between objects in focus.

Stick a stake in the ground every foot from camera to horizon. Near stakes will be out of focus, far ones too. DoF defines the ones in focus.

That is determined in-camera.

I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...

Likewise, I can see how sensor size might cause some antialiasing artifacts that will overlap with DoF, since a distant stake that's only 2px wide will seem out of focus


... but I wouldn't have confabulated DPI or PPI with DoF. That seems sloppy to me.
 
  • #47
DaveC426913 said:
That is determined in-camera.

I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...

It is not. Go to post #8 and read the posted link for a pretty easy to understand explanation.
 
  • #48
Oldhouse said:
It is not. Go to post #8 and read the posted link for a pretty easy to understand explanation.
OK, so I was correct. Today's DoF is a mixture of what I will call 'optical DoF' versus 'perceptual DoF'.

"Despite the debate, there is a standard in the photography industry : when looking at a print of 17cm*25cm at an optimum viewing distance of 25cm, a blur circle of less than 0.2mm diameter is seen as a dot and not a circle anymore. This is the diameter of the Circle of Confusion, the largest circle still perceived as a sharp point by the human eye. By default, the DoF is defined relatively to this degree of sharpness."

IOW, if it looks blurry (for whatever reason); it is declared to be blurry.

I still say this seems sloppy. Or at least application-specific. (By that I mean, it is less important what the actual causes of DoF are than what the effects/consequences of it are in the final format.) Put another way, it's become a practical, engineer-y factor, rather than a theoretical science-y factor. A loss of data there.

The implication, as I see it, is that, by this definition, DoF can be affected by almost innumerable subjective, spurious artefactual factors:
  • looking at a photo in dim light will affect its DoF (because: pupils)
  • looking at a photo that has been printed with poor colour calibration on an inkjet printer will affect its DoF (by smearing out the dots. All the dots.)
  • looking at a photo through a translucent sheet of paper will change the DoF, because the smallest discernible dot-as-opposed-to-circle is now huge.
  • There is literally no limit to the number of ways and conditions that can potentially alter the size of the smallest discernible dot in a picture when viewed after the fact.

These factors affect the entire image equally, regardless of the pic's subject. (Every dot is enlarged by the same factor; it's just that some dots cross the CoC threshold and some do not.)

But we already have a term for how a given image's overall sharpness is defined. It's called sharpness.

Why redefine a term that already had a specific useful meaning? DoF used to be about differential blurriness between disparate parts of the same photo's semantic subject (e.g. foreground vs background), as opposed to the photo as an item. Does that still have a name?

Before you pound on me: This is a rant. I acknowledge the information inherent in this definition of DoF is useful to the modern photographer; I just don't know why they need to muddle perfectly cromulent technical terms.

I'm an old school photographer, and for me, DoF is an in-camera factor (though it can be rendered in post - by differentially changing the dot sizes). I am perfectly happy to talk about dot size and sensor size and print rez and all those things. I'm just happy to use their specific technical labels so that they do not get lost in the melting pot with other similar, yet distinct, technical artefacts.


Question: did such a definition for DoF precede or follow the advent of digital photography?
 
  • #49
DaveC426913 said:
I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...
Blow up the photograph to a poster that's 20 feet wide and 40 feet tall and viewed from 1 foot away. Now none of the field appears to be in focus.
 
  • #50
Drakkith said:
Blow up the photograph to a poster that's 20 feet wide and 40 feet tall and viewed from 1 foot away. Now none of the field appears to be in focus.
No, it doesn't. Out-of-focus is distinctive*. The giant poster appears in focus but is no longer sharp.

* For example, bokeh is a depth-of-focus effect and depends on actual distance in the image, not on image-wide dot size.

We already have terms that fit the phenomena.

Sharpness affects every element in the image, whether 2 inches or 200 miles away. In the above example they're all one-inch-diameter dots, even the Moon, 200,000 miles away. Why call it depth of field? It makes no sense.

In this definition, your 20x40 photo has a DoF of exactly zero. Nothing is "inside the field of focus". That's a nonsensical outcome.

To-wit:

Man at 20 feet: "I love the photographer's use of depth of field. The bees are in focus while the field of flowers is not."
Man at one foot: "False. The depth of field has collapsed and vanished. This is the worst exhibit ever."

🤔
 
