What Happens to Photographic Clarity Without a Lens?

  • Thread starter: luckis11
  • Tags: Optics
AI Thread Summary
Photographic clarity without a lens results in a lack of focused images, leading to blurriness or indistinct patterns on the film. The discussion highlights that without a lens, rays from various points in a scene do not converge correctly on the imaging plane, preventing a clear representation of the image. The concept of diffraction patterns is explored, noting that different wavelengths can affect image quality, but a coherent image cannot form without proper focusing. Questions arise about visibility in different colored environments and the role of lenses in human vision, emphasizing the necessity of lenses for capturing clear images. Ultimately, the conversation underscores the importance of optical principles in photography and image formation.
luckis11
If a camera took a picture using a flat lens instead, what would it photograph? A blurred image?

Sorry, not a flat lens; I meant no lens at all. But then would there be a single-slit diffraction pattern depending on the diameter of the opening?

Then what I want to ask is: what if the opening were the size of one side of a cube, with the film on the opposite side?

Or what if there were no cube at all, just the film? Would the film burn? Then what about a film that would not burn?
 
You would get about the same thing you would get if you left the film out in the open. Just a big mess with nothing visible on it.

You wouldn't get a diffraction pattern, as the many different wavelengths hitting the film all diffract at different angles and such.
 
Cheers, the pinhole camera is a very useful clue. So, what happens if there is no box camera in front of the film?

Actually my question is how a blue ball can be visible in a red room from every angle of view. Trying to define "visible", I end up with a film without a box as the initial conception of the problem. Suppose the film could picture a blue circle within a red background, from every angle of view. The blue circle would grow larger as the film-ball distance gets smaller, so there is a kind of "blue" shadow within the "red". I guess the cone-shadow would be due to the superposition of "blue" and "red" waves, BUT there is a cone-shadow of "blue" waves surrounded by "red" waves from EVERY angle of view! I have difficulty formulating what I don't understand for the moment, but here's a try: why, by shifting the angle of view a little, does the cone that defines the circle also move a little? You'll tell me "because the red waves which are blocked by the ball and which are perpendicular to the film are not the same from every angle of view"? But this, if it is not wrong, is only a clue toward the answer. There were "blue" waves at the previous shadow; why did they disappear at the new angle of view? They did not disappear? But doesn't the previous answer imply that they did? Something like that is my confusion, which I cannot yet define well.
 
luckis11 said:
Actually my question is how can a blue ball be visible in a red room, from every angle of view. Defining "visible" I end up to a film without a box as the initial conception of the problem.

Aren't you forgetting about the lens in your eye?
 
Forget it, wrong questions anyway.
 
There's blurriness behind the lens except at one particular distance behind the lens. Does this mean that there's blurriness in front of the lens, yes or no? And why yes, or why no? Obviously no, because when seeing without a lens (outside the eye) there's no blurriness; but we need a lens to see and to photograph, so...?
 
luckis11 said:
This means that there's blurriness in front of the lens, or not?

Inasmuch as, if you placed an imaging plane there, it would not produce a focused image, yes.

luckis11 said:
And why?
Because an image can only form when the rays falling on point X on the imaging plane all came from point X' in the scene (and no rays from other points Y' do). This occurs nowhere except at the focal plane of the lens.

If too many rays from disparate parts of the image fall on a given image point, that point will not decipherably represent the image.
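As a quick sketch of that condition: the thin-lens equation 1/f = 1/d_o + 1/d_i gives the one plane behind the lens where all rays from a scene point X' reunite at a single image point X; at any other plane the aperture spreads them into a blur circle. A minimal numerical sketch (all numbers illustrative, not from the thread):

```python
# Thin-lens sketch: rays from one scene point converge only at the
# image distance d_i satisfying 1/f = 1/d_o + 1/d_i.  Placing the film
# anywhere else spreads the rays over a blur circle (similar triangles).

def image_distance(f_mm, d_o_mm):
    """In-focus image distance d_i from the thin-lens equation."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

def blur_circle(f_mm, d_o_mm, aperture_mm, plane_mm):
    """Blur-circle diameter when the film sits at plane_mm instead of d_i."""
    d_i = image_distance(f_mm, d_o_mm)
    return aperture_mm * abs(plane_mm - d_i) / d_i

f, d_o, D = 50.0, 2000.0, 10.0       # 50 mm lens, object at 2 m, 10 mm aperture
d_i = image_distance(f, d_o)          # about 51.28 mm behind the lens
print(round(d_i, 2))
print(round(blur_circle(f, d_o, D, d_i), 4))        # zero blur at focus
print(round(blur_circle(f, d_o, D, d_i + 5.0), 3))  # defocused plane blurs
```

With no lens at all there is no d_i; every film plane is the "wrong" plane, which is the blur the thread describes.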
 
That was exactly the answer I wanted. Grateful.
 
  • #10
luckis11 said:
That was exactly the answer I wanted. Grateful.
Credit me on your homework assignment. :biggrin:
 
  • #11
I guess that the rays from the left side outside end up on the right side inside, just as rays from the top outside end up on the bottom inside? And that this happens with a pinhole camera too?
 
  • #12
luckis11 said:
I guess that the rays from the left side outside end up on the right side inside, just as rays from the top outside end up on the bottom inside? And that this happens with a pinhole camera too?

Generally, yes. They tend to invert the image.
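The inversion follows from straight-line ray geometry: a ray from a scene point passes through the pinhole and continues to the film, so both film coordinates flip sign. A minimal sketch, assuming a pinhole at the origin with the film a distance d behind it (all numbers illustrative):

```python
# Pinhole-projection sketch: a scene point at (x, y), a distance z in
# front of the pinhole, projects along a straight ray through the hole
# to (-x*d/z, -y*d/z) on the film, a distance d behind it.
# Left/right and up/down are both swapped, i.e. the image is inverted.

def pinhole_project(x, y, z, d):
    """Film coordinates of the ray from scene point (x, y, z)."""
    return (-x * d / z, -y * d / z)

# A point up and to the right in the scene lands down and to the left:
print(pinhole_project(1.0, 2.0, 10.0, 1.0))
```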
 
  • #13
If he snaps a very close object it will be slightly blurred, and vice versa.
 
  • #14
DaveC426913 said:
Inasmuch as, if you placed an imaging plane there, it would not produce a focused image, yes. Because an image can only form when the rays falling on point X on the imaging plane all came from point X' in the scene (and no rays from other points Y' do). This occurs nowhere except at the focal plane of the lens. If too many rays from disparate parts of the image fall on a given image point, that point will not decipherably represent the image.

Certainly such a thing is implied by optics. So, at the surface of a spherical mirror there's no image of the tree because rays from all parts of the tree fall on every point of the surface?
 
  • #15
luckis11 said:
Certainly such a thing is implied by optics. So, at the surface of a spherical mirror there's no image of the tree because rays from all parts of the tree fall on every point of the surface?

I don't know about 'all' and 'every' but certainly there is an extremely strong correlation between 'the light from one point of an object falling on only one point of the imaging surface' and 'the quality of the resulting image'.
 
  • #16
Figures 1-3 show the principle of an artificial...
http://en.wikipedia.org/wiki/Perspective_projection_distortion

This article talks about many more "figures". Where are they?
 
  • #17
Drakkith said:
You wouldn't get a diffraction pattern as the many many different wavelengths hitting the film all diffract at different angles and such.
You ALWAYS get a diffraction pattern (in the strict sense of the term). The image produced by a lens is just as much a diffraction pattern as Young's slits produce; it's just not full of 'identifiable fringes'. When you have large apertures, it is usually possible to get a good idea of what goes on by using simple ray tracing.
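A minimal sketch of why broadband light hides the fringes: the single-slit intensity goes as sinc²(πa·sinθ/λ), so the dark-fringe angle sinθ = λ/a is different for every wavelength, and with many wavelengths the overlapping patterns wash out the visible fringes (slit width and wavelengths below are illustrative):

```python
# Single-slit diffraction sketch: I(theta) = sinc^2(pi*a*sin(theta)/lam).
# The first dark fringe sits at sin(theta) = lam/a, so blue and red light
# put their fringes at different angles; summed over a broad spectrum
# the minima of one wavelength fill in with light of another.
import math

def slit_intensity(theta, a, wavelength):
    """Normalized single-slit intensity at angle theta (peak = 1)."""
    beta = math.pi * a * math.sin(theta) / wavelength
    if beta == 0.0:
        return 1.0
    return (math.sin(beta) / beta) ** 2

a = 100e-6                          # 0.1 mm slit (illustrative)
for lam in (450e-9, 650e-9):        # blue vs red
    theta_min = math.asin(lam / a)  # first dark-fringe angle for this lam
    print(lam, theta_min, slit_intensity(theta_min, a, lam))
```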
 
  • #18
DaveC426913 said:
I don't know about 'all' and 'every' but certainly there is an extremely strong correlation between 'the light from one point of an object falling on only one point of the imaging surface' and 'the quality of the resulting image'.
That couldn't have been said better by Sir Humphry. I like it; well done!
 
  • #19
Does nobody know the answer to my last question? Then can somebody tell Wikipedia to correct its mistake?
 
  • #20
luckis11 said:
Does nobody know the answer to my last question? Then can somebody tell Wikipedia to correct its mistake?

You can correct it yourself...
 
  • #21
What's my mistake?
 
  • #22
http://egsc.usgs.gov/isb/pubs/MapProjections/projections.html
http://paulbourke.net/miscellaneous/domefisheye/fisheye/
http://mesh.dl.sourceforge.net/project/stellarium/Stellarium-user-guide/0.10.2-1/stellarium_user_guide-0.10.2-1.pdf

So, is the one fisheye projection the Orthographic Azimuthal, and the other the Equidistant Azimuthal? Or not, because...?
 
  • #23
I'm sorry. I have lost track of exactly what you are asking. Perhaps you should reformulate it as a new topic. It does not seem to be about 'simple optics' anymore.
 
  • #24
The projection of the spherical map of the stars onto a disk. There are various map projections, and they describe the fisheye map projection as different from the Orthographic Azimuthal or the Equidistant Azimuthal, but at the above sites they describe it as if it is not different. I guess they mean that a fisheye lens can be adjusted to produce these two types of projections too (a bit weird, that). It's just that I cannot find a site explaining the photographic-perspective map projections as well as this site explains map projections:
http://egsc.usgs.gov/isb/pubs/MapPro...ojections.html
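The two radial laws in question can be compared directly: the equidistant azimuthal projection maps a star at angular distance θ from the optical axis to radius r = f·θ on the disk, while the orthographic azimuthal maps it to r = f·sin θ; the two agree only near the axis. A minimal sketch (focal length f = 1 is illustrative):

```python
# Azimuthal-projection sketch: both laws map angular distance theta from
# the axis to a radius r on the flat image, but with different radial
# scaling -- equidistant: r = f*theta; orthographic: r = f*sin(theta).
import math

def r_equidistant(theta, f=1.0):
    """Equidistant azimuthal: radius proportional to angle."""
    return f * theta

def r_orthographic(theta, f=1.0):
    """Orthographic azimuthal: radius proportional to sin(angle)."""
    return f * math.sin(theta)

for deg in (10, 45, 90):
    t = math.radians(deg)
    print(deg, round(r_equidistant(t), 3), round(r_orthographic(t), 3))
```

The printed rows show the two radii drifting apart as θ grows, which is why the projections only look alike near the center of the disk.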
 
  • #25
luckis11 said:
There's blurriness behind the lens except at one particular distance behind the lens. Does this mean that there's blurriness in front of the lens, yes or no? And why yes, or why no? Obviously no, because when seeing without a lens (outside the eye) there's no blurriness; but we need a lens to see and to photograph, so...?

DaveC426913 said:
Inasmuch as, if you placed an imaging plane there, it would not produce a focused image, yes. Because an image can only form when the rays falling on point X on the imaging plane all came from point X' in the scene (and no rays from other points Y' do). This occurs nowhere except at the focal plane of the lens. If too many rays from disparate parts of the image fall on a given image point, that point will not decipherably represent the image.

But if this is so, then in these photographs:
http://en.wikipedia.org/wiki/Bubble_chamber
http://en.wikipedia.org/wiki/Cloud_chamber
the "particle track" (a line of bubbles) shouldn't be able to be photographed so sharply defined; blurriness should be photographed instead, since no lens was used! Correct?
 