Why can't we form an image with just one ray of light?

SUMMARY

The discussion centers on the formation of images in geometrical optics, emphasizing that an image is formed where multiple light rays converge, rather than from a single ray. The intensity of light at these convergence points enhances visibility but does not solely define image formation. Key concepts include the role of phase differences in wave interference and the necessity of multiple rays for a clear image, as demonstrated through examples involving mirrors and lenses. The conversation also touches on diffraction and the optimal aperture settings for cameras to balance light collection and image clarity.

PREREQUISITES
  • Understanding of geometrical optics principles
  • Familiarity with wave interference and phase differences
  • Knowledge of lens and mirror optics, including parabolic and spherical reflectors
  • Basic concepts of diffraction and its impact on image quality
NEXT STEPS
  • Study Fermat's Principle in detail to understand light paths
  • Learn about the effects of chromatic and spherical aberration on image quality
  • Explore the relationship between aperture size and image sharpness in photography
  • Investigate diffraction patterns and their influence on optical clarity
USEFUL FOR

Students and professionals in optics, photographers seeking to improve image quality, and anyone interested in the principles of light behavior and image formation.

ashishsinghal
In geometrical optics we consider that an image is formed at the point where two rays meet. But the meeting of two rays will just create a change in intensity, and that change will depend on the phase difference.

Also, if the intensity, say, doubles, it will make the image more visible, but we cannot say that an image is formed only there. I'm confused. Please help...
 
Due to the nature of wave-like motion, when two waves come together that are in phase with each other, they add up into a larger, more intense wave.

I'm a bit confused: what do you mean when you say that an image isn't formed only there? Do you mean the reflection off an object, where waves unite and bounce off?
 
I mean that the image is considered only at the points where the waves add up. In that case the image will be intensified, let's say 4 times. But if we consider an image of intensity 4I, we must also consider images at the points where the intensity is lower, say I, 2I, etc.

Also, how can we be so sure that the light waves are in phase?
Take the example of a concave mirror with a reflecting surface on only one side of the principal axis. Here we can easily see that the light rays travel different distances.
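The range of intensities mentioned above can be checked with a quick phasor sum; a minimal sketch (assuming two equal-amplitude waves, with the single-wave intensity normalised to 1):

```python
import numpy as np

# Add two equal-amplitude waves as phasors and compare the resulting
# intensity (|amplitude|**2) with the single-wave intensity I = 1.
for phi in (0.0, np.pi / 2, np.pi):
    total = 1.0 + np.exp(1j * phi)   # phasor sum of the two waves
    intensity = abs(total) ** 2      # equals 4 * cos(phi/2)**2
    print(f"phase {phi:.2f} rad -> intensity {intensity:.2f}")
# in phase: 4x the single-wave intensity; fully out of phase: 0
```

So the combined intensity really does sweep through every value from 0 to 4I as the phase difference varies, which is exactly the worry raised in the post.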
 
If you use a parabolic reflector, the distances along all rays to the focus are the same. That's one way of actually constructing a paraboloid. A spherical reflector is 'near enough' for many purposes (though not for the HST, for instance).
Look up Fermat's Principle on Wikipedia.
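The equal-path-length property of a parabolic reflector is easy to verify numerically; a minimal sketch (the focal length f, wavefront height h, and ray positions are arbitrary choices):

```python
import numpy as np

def total_path_length(x, f=1.0, h=5.0):
    """Path length of a ray travelling straight down from the plane y = h
    to the parabola y = x**2 / (4*f), then reflected to the focus (0, f)."""
    y = x**2 / (4 * f)              # hit point on the parabola
    down = h - y                    # vertical leg from the incoming wavefront
    to_focus = np.hypot(x, f - y)   # reflected leg to the focus
    return down + to_focus

xs = np.linspace(-2.0, 2.0, 9)
lengths = total_path_length(xs)
print(lengths)  # every entry equals h + f = 6.0, regardless of x
```

Since every ray covers the same optical path, all the rays arrive at the focus in phase, which is why the phase-difference worry doesn't arise for a paraboloid.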
 
ashishsinghal said:
In geometrical optics we consider that an image is formed at the point where two rays meet. But the meeting of two rays will just create a change in intensity, and that change will depend on the phase difference.

Also, if the intensity, say, doubles, it will make the image more visible, but we cannot say that an image is formed only there. I'm confused. Please help...

Geometric optics is a high-frequency approximation, so a rough treatment with geometric optics will ignore the phase of the light. This is because the phase varies so quickly over the length scales of our problems that it is assumed to be random. There are, however, other high-frequency methods that do take the signal's phase into account. Geometric optics can account for it too if you work with a more rigorous mathematical approach.
 
ashishsinghal said:
In geometrical optics we consider that an image is formed at the point where two rays meet. But the meeting of two rays will just create a change in intensity, and that change will depend on the phase difference.
Intensity has nothing to do with it. The image of a point is the only place where the light from it gathers to a point. Anywhere else, the light will be spread out (i.e. blurry).
 
Why do we need multiple light rays from a point to gather its image? Why can't it be done with a single ray? In other words,

What happens when two rays (from the same point) meet that makes the image visible?
 
ashishsinghal said:
Why do we need multiple light rays from a point to gather its image? Why can't it be done with a single ray?
There are rays going in all directions from the source, whether or not we draw them on paper. Every ray that meets your eye contributes to what you see.

If all of those rays meet your eye at the same point, you see a point-like object.

If all those rays meet your eye in almost the same point, you see a blurry object.

If those rays meet your eye everywhere, you just see light without any discernible shape.


We call the place where the first case happens the "image" because if our eye is there, we see something that actually looks like the source.


What happens when two rays (from the same point) meet that makes the image visible?
Every ray that passes through the ideal lens passes through the image. We draw two rays because that's enough information to find it.



Maybe an example with reflection would be easier to understand.

Suppose we have a lamp with a light-bulb.

If the lamp shines on a mirror and we look, we see a light-bulb because the light is hitting our eye in only one place coming from one direction.

If the lamp shines on a wall, we see an illuminated wall rather than a light-bulb, because the light is hitting our eye everywhere from all directions.
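The claim that every ray through an ideal lens passes through the image point can be checked with a short paraxial ray trace; a minimal sketch (the object distance s_o = 30 and focal length f = 10 are arbitrary choices):

```python
def trace_through_thin_lens(slope, s_o=30.0, f=10.0):
    """Trace one ray from an on-axis object point through an ideal thin lens.
    Returns the ray height at the image plane given by 1/s_o + 1/s_i = 1/f."""
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)   # thin-lens equation -> image distance
    y_lens = slope * s_o                 # height where the ray meets the lens
    slope_after = slope - y_lens / f     # ideal thin-lens refraction rule
    return y_lens + slope_after * s_i    # height at the image plane

# Rays leaving the same object point at very different angles
# all land at height 0 in the image plane -- the image point.
heights = [trace_through_thin_lens(m) for m in (-0.2, -0.1, 0.05, 0.15, 0.3)]
print(heights)  # each ~0.0
```

Any two of these rays are enough to locate where they all cross, which is why ray diagrams only bother drawing two.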
 
ashishsinghal said:
Why do we need multiple light rays from a point to gather its image? Why can't it be done with a single ray? In other words,

What happens when two rays (from the same point) meet that makes the image visible?

The more light that gets into your eye, the brighter the image is. The same goes for a camera. If you use a large aperture on your lens, you can get a picture in lower light because more of those 'rays' get onto the sensor. You can use a pinhole and form a dim image of an object (effectively using only one ray from each point on the object).

The quality of the image depends on how many of the rays actually get to the same spot. (That's in terms of ray optics, which is affected by things like chromatic and spherical aberration.) There is another issue, involving diffraction (the wave aspect), and that determines whether a sharp edge appears as a sharp edge or as a blur / fringes. In general, the wider the aperture, the less blurring is caused by diffraction but the more impairment is caused by other defects. Big telescopes are big in order to collect more light but also to improve the acuity / resolving power. As a matter of fact, there is always an optimum aperture for a camera, where the combined ray-optics and diffraction impairments are smallest, and it always seems to be about f/8.
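That trade-off can be illustrated with a toy blur model; a minimal sketch (the 1/N**3 aberration scaling and the 3.2 mm coefficient are hypothetical values tuned for illustration, not measured lens data):

```python
import numpy as np

# Toy model: diffraction blur grows with f-number N, while aberration
# blur shrinks as the lens is stopped down. Both diameters are in mm.
wavelength_mm = 550e-6      # green light
aberration_coeff = 3.2      # hypothetical lens-quality constant

def blur_diameter(N):
    diffraction = 2.44 * wavelength_mm * N   # Airy-disk diameter
    aberration = aberration_coeff / N**3     # assumed aberration scaling
    return np.hypot(diffraction, aberration) # combine in quadrature

Ns = np.linspace(2.0, 22.0, 2001)
best = Ns[np.argmin(blur_diameter(Ns))]
print(f"sharpest aperture ~ f/{best:.1f}")   # ~ f/8 with these constants
```

With these (made-up) numbers the minimum lands near f/8, matching the rule of thumb above; a sharper or softer lens would shift the optimum by changing the aberration coefficient.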
 
