Optics Question - How is light divided into a 2D image?

AI Thread Summary
Light is divided into a 2D image through the use of a lens, which focuses different intensities and colors of light onto specific areas of a film or CCD sensor. Each pixel on the sensor captures varying light levels based on its position relative to the image, allowing for a complete representation of the scene. The lens plays a crucial role in forming this image, as it ensures that light from different parts of the scene is directed to the appropriate pixels. Without the lens, all pixels would receive similar light information, resulting in a uniform image. Understanding this process can be demonstrated through simple experiments, such as using a magnifying glass to project an image onto a surface.
phys_person
The theory seems simple. Light strikes the film in a camera, or a CCD, and the different regions of intensity are recorded. But what I can't get my head around is how a different portion of the film strip or CCD *knows* which part of the image it is supposed to be representing.

For example, if I were to make a series of pinholes in a sheet of paper, I could view through each one of them a complete image that is nearly identical to the image from the hole beside it. If I were to record some details about the image, such as light intensity or general colour, I would get results identical to those from the hole beside it. How can a series of readings from a film or CCD be constructed into a complete image? How can one small piece say it's looking at blue, and another small piece say it's looking at red? I just don't get how the image is divided up like that, when the total light information striking a small point on a 2D plane should be almost completely identical to that at another small point beside it.
 
I think you are confused here; in fact I'm not quite following your point. Light doesn't just strike the film or CCD in a camera. If that were all that happened, every pixel would show the same brightness and colour. The light goes through a lens that forms an image on the surface of the film or CCD, and that image shows different levels of brightness depending on where each pixel is located with respect to the image. It also shows different colours, provided the object being imaged by the lens emits or reflects different colours by different amounts from place to place. The key here is the lens; without it you don't have a camera.
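To make this concrete, here is a minimal sketch in Python (my own illustration, not from the thread) of the idealized pinhole/thin-lens projection: because every ray must pass through a single point, each scene point lands at exactly one spot on the sensor, so neighbouring pixels really do receive light from *different* parts of the scene.

```python
def project(point, focal_length):
    """Map a 3D scene point (x, y, z) to a 2D sensor position.

    Idealized pinhole model: every ray passes through one aperture,
    so each scene point maps to a single sensor spot, inverted and
    scaled by focal_length / z.
    """
    x, y, z = point
    return (-focal_length * x / z, -focal_length * y / z)

# Two scene points on either side of the optical axis, 10 m away,
# imaged with a 50 mm focal length:
a = project((1.0, 0.0, 10.0), 0.05)   # point to the right of the axis
b = project((-1.0, 0.0, 10.0), 0.05)  # point to the left of the axis
print(a, b)  # distinct sensor positions -> distinct pixels
```

Without the aperture or lens, every sensor point would receive rays from the whole scene at once, which is exactly the "every hole sees the same image" situation described above.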
 
Echoing Jeff, I think you (the OP) are neglecting the function of your eye in your thought experiment.
 
It seems as though you are saying the lens breaks up an image into different portions of light across a 2D plane. If so, how can I see pretty much the same total image through different portions of the lens in someone's eyeglasses? What about light passing through a window? What about a window with a slight bend that isn't even noticeable to the eye? Just what kind of changes does the light have to undergo to become a 2D map of an image?

As opposed to just "regular" light that would seem the same from different spots.

Thank you
 
phys_person: The best way to understand what is happening is to do a VERY simple experiment. Do you have a magnifying glass? You can buy a cheap one (about $1) at your local drug store, since the elderly use them for reading. Stand opposite a window in your house during the daytime, so you have plenty of light, and hold the magnifying glass a few inches from the wall. You will see an inverted image on your wall of what you see outside. Now imagine a CCD camera placed at the location of the wall.
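You can even predict where that image forms with the thin-lens equation, 1/f = 1/d_obj + 1/d_img. A quick sketch (the ~10 cm focal length and 3 m scene distance are my assumed example numbers, not from the thread):

```python
def image_distance(f, d_obj):
    """Solve the thin-lens equation 1/f = 1/d_obj + 1/d_img for d_img."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# A magnifying glass with ~10 cm focal length, imaging a scene
# outside the window roughly 3 m away (distances in metres):
d_img = image_distance(0.10, 3.0)
print(round(d_img, 4))  # -> 0.1034
```

About 10 cm, i.e. the image forms a few inches behind the lens, which is why you hold the magnifying glass a few inches from the wall.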
 