Why do we see distant objects?

  • Thread starter mnb96
  • #1
mnb96
Hi,

I would like to clear up the following misunderstanding.
Suppose we are looking at an idealized flat diffuse emitter perpendicular to the line of sight (like a wall). If we keep facing this wall but slowly walk away from it, the radiant flux we receive from a small patch of that surface should decrease as the inverse square of the distance; in fact, the solid angle spanned by this small patch decreases as 1/r^2.
However, the object appears to have the same brightness instead of progressively fading to black and becoming invisible... why?
 

Answers and Replies

  • #2
DaveC426913
Gold Member
Hi,

I would like to clear up the following misunderstanding.
Suppose we are looking at an idealized flat diffuse emitter perpendicular to the line of sight (like a wall). If we keep facing this wall but slowly walk away from it, the radiant flux we receive from a small patch of that surface should decrease as the inverse square of the distance; in fact, the solid angle spanned by this small patch decreases as 1/r^2.
However, the object appears to have the same brightness instead of progressively fading to black and becoming invisible... why?
Because the number of photons per cross-sectional area of the image has not changed.

Imagine a CCD camera capturing photons.
When the object covers a 5x5 area of pixels, the sensor receives 25 "photons" of light (one per pixel) over some arbitrary unit of time.
Step 5x farther away and the object now covers only 1 pixel; that pixel still receives 1 "photon" of light over the same time.
Same amount of light per unit area, same brightness.
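
Just to make the counting explicit, here is a minimal numerical sketch of that pixel argument (the 5x5-pixel / 25-photon numbers are the illustrative values from above; everything else is assumed):

[code]
# Sketch of the pixel argument: the total light received from the object falls
# as 1/r^2, but so does the number of pixels the object covers, so the light
# per covered pixel stays constant as long as the object is resolved.

def photons_per_pixel(r, r0=1.0, total_at_r0=25.0, pixels_at_r0=25.0):
    total = total_at_r0 * (r0 / r) ** 2    # photons per unit time ~ 1/r^2
    pixels = pixels_at_r0 * (r0 / r) ** 2  # covered pixels ~ 1/r^2 (object still resolved)
    return total / pixels

for r in [1.0, 2.0, 5.0]:
    print(f"r = {r}: {photons_per_pixel(r):.2f} photons/pixel")  # always 1.00
[/code]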
 
  • #3
mnb96
Hi Dave,
Thanks for your answer. I think I intuitively understand your example, but I still have trouble translating it into formulas.

Let's consider the same scenario as before, but instead of a wall, consider a tiny (infinitesimal) piece dS of that wall which still behaves as a Lambertian emitter, and call L the radiance it emits in an arbitrary direction.
Pretend also that our CCD sensor is made of infinitesimal "pixel-units" dA.

If we calculate the irradiance received by the "pixel" dA placed right in front of dS we have:
[tex]dE_a = \int_{\Omega} L \cos\theta \, d\omega[/tex]

where [itex]\Omega[/itex] is the hemisphere of directions originating from dA, and [itex]d\omega[/itex] is the infinitesimal solid angle in some direction of the hemisphere.
The integrand is zero for all directions except the one that intercepts dS (where [itex]\cos\theta \approx 1[/itex], since dS is directly in front of dA), so we have:

[tex]dE_a = L \cdot dS_{proj} = L \frac{dS}{r^2}[/tex]

which again suggests that the irradiance at the pixel should decrease with the square of the distance, and tend to 0 as we walk away from the source.

Obviously there must be a mistake in this reasoning, but at the moment I can't spot it.



*** EDIT: ***
Probably the emitted radiance L was supposed to be treated as a function L(r) of the distance, rather than as a constant. Basically, if L0 is the radiance received by dA in the "starting position", then increasing the distance would yield [itex]L(r)=L_0\cdot r^2[/itex], which cancels the other [itex]r^2[/itex] in the above equation, yielding [itex]dE = L_0 dS[/itex].
Is this correct?
 
  • #4
phyzguy
Science Advisor
Hi,

I would like to clear up the following misunderstanding.
Suppose we are looking at an idealized flat diffuse emitter perpendicular to the line of sight (like a wall). If we keep facing this wall but slowly walk away from it, the radiant flux we receive from a small patch of that surface should decrease as the inverse square of the distance; in fact, the solid angle spanned by this small patch decreases as 1/r^2.
However, the object appears to have the same brightness instead of progressively fading to black and becoming invisible... why?
Assuming the emitter has finite extent, the total flux (measured, for example, in joules/sec) that we receive from it will decrease as 1/r^2. However, the brightness of a given patch (astronomers call this the surface brightness, measured in joules/sec/steradian or joules/sec/arcsecond^2) remains constant, for the reason DaveC described. This means that the surface brightness of a distant star is basically the same as the surface brightness of the Sun; we simply receive a lot more total flux from the Sun because its apparent size is much larger. Does this help?
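
In symbols (just a sketch, writing A for the emitter's area, F for the received flux and [itex]\Omega[/itex] for the solid angle the emitter subtends; cosine factors and constants are ignored):

[tex]F \propto \frac{A}{r^2}, \qquad \Omega = \frac{A}{r^2}, \qquad \text{surface brightness} = \frac{F}{\Omega} = \text{constant (independent of } r)[/tex]

Both the flux and the solid angle fall as 1/r^2, so their ratio - the surface brightness - does not depend on the distance.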
 
  • #5
Very interesting question + answers, I had never thought of that issue! :rolleyes:

Thus, if I understand correctly, the intensity of a faraway object should stay the same as long as it covers more than one pixel (of the eye or camera), but should be reduced once it covers less than one pixel. That would explain why a faraway sizeable object has (almost) the same intensity, while a point source such as a star has ever less intensity at greater distance.
 
  • #6
mnb96
Hi Phyzguy!

Thanks! Your explanation helped a lot.
Indeed, when I posed this question I was also thinking about why we can see stars that are so far away from us. Although the explanation in terms of flux per steradian makes perfect sense, I still find this phenomenon quite peculiar and interesting: easy to accept, but kind of difficult to visualize.

My first reaction when facing this problem was to think that if we receive less flux, and therefore less energy, we should also perceive a lower brightness. I don't yet understand how our eyes manage to keep track of the flux per steradian.

Two last questions:

1) To me, all this seems to be valid even if we drop the assumption that the surface is Lambertian. It should also hold if we look straight at a "laser pointer", am I right?

2) If you have time, can you confirm whether my mathematical reasoning is correct?


Thanks again
 
  • #7
phyzguy
Science Advisor
The point you're missing is that if your detector accepts a fixed solid angle and you move the object further away, you are seeing more of the emitter: the area of the emitter that you see increases as ~r^2, and this cancels the 1/r^2 fall-off of the flux from each patch, so the surface brightness stays constant. Try looking at figure 2 at this web site:

http://www.astro.lsa.umich.edu/undergrad/labs/brightness/index.html [Broken]
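
Put in symbols (a sketch with symbols of my own, not taken from the linked page): if each detector element accepts a fixed solid angle [itex]\Omega_{det}[/itex], the emitting area it sees is [itex]A_{seen} = \Omega_{det}\, r^2[/itex], so the power it collects behaves as

[tex]E \;\propto\; \frac{A_{seen}}{r^2} \;=\; \Omega_{det} \;=\; \text{constant.}[/tex]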
 
  • #8
mnb96
[if] you move the object further away, you are seeing more of the emitter...
but what about the scenario I described, where the emitter is an infinitesimally small surface patch, and there is nothing around it?
 
  • #9
rcgldr
Homework Helper
With sufficient distance, the light does get dimmer. Planets reflecting sunlight appear brighter than distant stars, even though those stars would be much brighter if they were as close as the planets we occasionally see at night.
 
  • #10
sophiecentaur
Science Advisor
Gold Member
If you take a photo of the Moon and want to expose it correctly, you have to give it the same exposure as an object on Earth in 'bright sunlight' - for all the above reasons, the received light per unit area of the image doesn't drop off as the square of the distance.
There is a distance beyond which a bright object appears like a point source (you can't resolve its shape). Beyond that, the inverse square law is all that can operate, so it looks fainter and fainter. The 'Magnitude' of a star has nothing to do with its diameter - just its total power output spread over a sphere with us at the surface.

The (easily visible) planets are interesting to observe. They look 'different' somehow, and this is because they actually subtend a resolvable angle. They look like tiny Moons rather than point-like stars because there is a significant range of angles in the light reaching your eye. (They don't 'twinkle' either.) Optical 'infinity' can be really quite a long way away.
 
  • #11
phyzguy
Science Advisor
With sufficient distance, the light does get dimmer. Planets reflecting sunlight appear brighter than distant stars, even though those stars would be much brighter if they were as close as the planets we occasionally see at night.
I think you're missing the point. Everything gets dimmer with distance, with the brightness falling off as 1/r^2. The point is that the surface brightness is constant. The surface brightness is the brightness of a small patch of whatever is emitting the light. The object emitting the light gets dimmer because it gets smaller, not because its surface brightness is reduced.
 
  • #12
sophiecentaur
Science Advisor
Gold Member
I think you're missing the point. Everything gets dimmer with distance, with the brightness falling off as 1/r^2. The point is that the surface brightness is constant. The surface brightness is the brightness of a small patch of whatever is emitting the light. The object emitting the light gets dimmer because it gets smaller, not because its surface brightness is reduced.
Yebbut the perceived brightness is based on the same solid angle, so an illuminated wall will produce the same image brightness (power per unit area of the image) on a camera sensor or your eye over a huge range of distances; the image just gets smaller - see my comment about photographing the Moon.
 
  • #13
phyzguy
Science Advisor
but what about the scenario I described, where the emitter is an infinitesimally small surface patch, and there is nothing around it?
Again, I think you are confusing brightness with surface brightness. Suppose the object has a total luminosity L, an infinitesimally small area dA, and you are a distance r away. The luminosity L is spread over a sphere of area 4*pi*r^2, so you measure a flux of L/(4*pi*r^2); the brightness of the object therefore decreases as 1/r^2. The area dA at your distance r subtends a solid angle of dA/r^2, so the surface brightness, which is the flux divided by the solid angle, is (L/(4*pi*r^2)) / (dA/r^2) = L/(4*pi*dA), and is independent of distance.
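
In formula form (same assumptions as above; note that this L is a total luminosity in joules/sec, not the radiance L of post #3):

[tex]F = \frac{L}{4\pi r^2}, \qquad \Omega = \frac{dA}{r^2}, \qquad \frac{F}{\Omega} = \frac{L}{4\pi\, dA} = \text{constant.}[/tex]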
 
  • #14
mnb96
Hi again,
thank you all for your help and explanations.

@phyzguy: it seems to me that your example, and also Dave's first example with the CCD sensor, work essentially because if you "watch" a certain light source from dA through a constant solid angle, then walking away from the object decreases the radiant flux, but at the same time that solid angle intercepts a larger area of the object, and these two factors cancel each other.

What is causing me trouble is the following. If we keep walking farther and farther away from the object, we reach a point where the solid angle intercepts the entire visible surface of the object. Beyond that point the flux keeps decreasing with distance, but the visible area of the object cannot increase any more, so the object should progressively fade away. Am I right?

For example, Dave's CCD example stops at the point where the whole object is intercepted by exactly a single pixel. I guess that if we keep moving away from the object, the camera sensor records a single pixel with decreasing intensity. Is this correct?

Finally, it seems to me that brightness, surface brightness, and magnitude are not standard radiometric terms, and luminosity is being used in an unusual way. Would it be too much to ask to translate those terms into the corresponding radiometric quantities: radiant energy, radiant flux, radiant exitance, radiant intensity, irradiance, radiance?

Thanks!
 
  • #15
[...] For example, Dave's CCD example stops at the point where the whole object is intercepted by exactly a single pixel. I guess that if we keep moving away from the object, the camera sensor records a single pixel with decreasing intensity. Is this correct? [...]

Thanks!
Yes, that's also what I concluded in post #5. With a bigger telescope you see more stars because more light is concentrated on each pixel. Nice discussion. :smile:
 
  • #16
mnb96
Alright, now things are starting to become clearer.
The only thing we haven't discussed is: what determines the "amplitude" of that constant solid angle through which light "travels" before hitting a "sensor unit"?

As you said, with an ordinary digital camera (or our eyes) we can't see stars that are too distant from us. In my mental picture this happens because I imagine a sort of very thin, infinitely long cone originating from every point of the sensor.
With a large telescope, I imagine those cones being much, much thinner.

Supposing we have an ideal sensor with an infinite number of infinitesimally small "pixels", I concluded that the "amplitude" of those cones (solid angles) is somehow determined by the lenses, but how?
 
  • #17
sophiecentaur
Science Advisor
Gold Member
The base of the 'cone' is the pupil of your eye / telescope - that defines the 'power gathering' factor. It is much greater for a 1m telescope, of course.

It just struck me that another way of looking at this is to imagine that the illuminated source consists of a matrix of lamps. As you move away, the received energy from each lamp will decrease as the inverse square, but the geometry of the focussing system will produce more lamp images on each unit area of the final image (the area of the image decreases as 1/d^2). So the power falling on a unit area of the image will be the same.

This process will fail once there is only one pixel (or in fact a small group of pixels, defined by the diffraction limit of the optics), after which this 'compensation' mechanism ceases to work and the remaining pixel receives power proportional to 1/d^2.
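
A small numerical sketch of this two-regime behaviour (illustrative numbers only; a one-pixel cut-off stands in for the diffraction limit mentioned above):

[code]
# While the source covers more than one pixel, the power per pixel is constant;
# once its image shrinks below one pixel (or the diffraction limit), that single
# remaining pixel receives power falling as 1/d^2.

def power_per_pixel(d, d0=1.0, total_at_d0=25.0, pixels_at_d0=25.0):
    total = total_at_d0 * (d0 / d) ** 2              # total received power ~ 1/d^2
    pixels = max(pixels_at_d0 * (d0 / d) ** 2, 1.0)  # the image cannot shrink below one pixel
    return total / pixels

for d in [1, 2, 5, 10, 100]:
    print(f"d = {d:>3}: {power_per_pixel(d):.4f}")
# constant (1.0) up to d = 5, where the image is one pixel; beyond that it falls as 1/d^2
[/code]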

Reading your last post again, I think you are approaching this problem from the eye rather than from the source.
 
  • #18
mnb96
Hi sophiecentaur!

I am still a bit confused on this point.
For instance, right now I am holding the tip of a pen right in front of my left eye, and it looks very blurry. If I make a physical effort (which actually hurts a bit) I can focus and see the tip of the pen perfectly.

What is happening in this scenario in terms of those imaginary solid angle "cones" I was talking about?
Am I changing their "amplitude"?
 
  • #19
DaveC426913
Gold Member
Hi sophiecentaur!

I am still a bit confused on this point.
For instance, right now I am holding the tip of a pen right in front of my left eye, and it looks very blurry. If I make a physical effort (which actually hurts a bit) I can focus and see the tip of the pen perfectly.

What is happening in this scenario in terms of those imaginary solid angle "cones" I was talking about?
Am I changing their "amplitude"?
Are you specifically asking about the object being blurry? It's blurry because the rays are not focusing at a point on your retina, they're smeared out because your eye can't focus that close.

It has nothing to do with the cone model except inasmuch as the cone model assumes that the apex of the cone is a point.
 
  • #20
mnb96
Are you specifically asking about the object being blurry? It's blurry because the rays are not focusing at a point on your retina, they're smeared out because your eye can't focus that close.
OK... I was just thinking that if we capture an image with our digital camera, and assuming the image is perfectly in focus, each pixel records the radiance hitting the sensor from a specific point, from a specific direction, and through a certain solid angle. How large is this solid angle? What determines it?
 
  • #21
DaveC426913
Gold Member
OK... I was just thinking that if we capture an image with our digital camera, and assuming the image is perfectly in focus, each pixel records the radiance hitting the sensor from a specific point, from a specific direction, and through a certain solid angle. How large is this solid angle? What determines it?
The lens size. Most digital cameras use a moderately wide-angle lens of 35 mm to 28 mm. This captures a specific size of image on the focal plane.
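
For a rough estimate (a simplified pinhole/thin-lens picture, with p the pixel pitch and f the focal length - symbols introduced here just for illustration), the patch of scene recorded by one pixel subtends roughly

[tex]\Omega_{pixel} \approx \left(\frac{p}{f}\right)^2[/tex]

so a longer focal length or smaller pixels mean each pixel looks through a narrower cone.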
 
  • #22
mnb96
Ok, thanks!
 
  • #23
sophiecentaur
Science Advisor
Gold Member
assuming the image is perfectly in focus, each pixel records the radiance hitting the sensor from a specific point, from a specific direction,
One area of your retina doesn't just receive light from a 'specific point'; it receives it from an AREA. If you take the object further away, you will see a bigger area of the object on the same area of your retina.

The focussing thing is a different issue. You may want to sort that out too but don't bring it into this question. It's a bit of a red herring.
 
