rayjohn01 said:
Since when did the sun go monochromatic ??
8 Mpix is not enough; a 55mm standard lens is useless in capturing scenes which are anything like what the eye sees, due to its peripheral vision.
Peripheral vision is quite useless in what you believe to be "seeing". When viewing static images, it's mainly used to decide where to re-target the high-resolution central vision. The imagery from the periphery is very low-res (orders of magnitude lower than central vision) and, to the best of current knowledge, is not directly composited into the image that we see.
To a good first-order approximation, we see through a 3-degree keyhole. For a rectilinear lens with a 35 mm frame size, the field of view is given by
\gamma = 2\arctan\left[0.035/(2f)\right].
So 3 degrees is like looking through a 670mm telephoto on a 35mm camera. That's where all the high-resolution imagery comes from. Everything else is a blur used to decide where to allocate that high-resolution imager next. And that high-resolution imager is a sampled one, and can take maybe 6 samples per second.
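For concreteness, here's a quick numeric check of that figure (a minimal sketch; 0.035 m stands for the nominal 35mm frame size used above):

    import math

    # Rectilinear lens: gamma = 2*arctan(d/(2*f)), frame size d = 0.035 m.
    d = 0.035                      # 35 mm frame size, in metres
    gamma = math.radians(3.0)      # the ~3-degree "keyhole" of central vision
    f = d / (2 * math.tan(gamma / 2))
    print(f"equivalent focal length: {f * 1000:.0f} mm")   # -> ~668 mm, i.e. ~670 mm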
Our brain does all the compositing needed to make us perceive a "continuous" visual reality. It's nowhere near continuous!
If you consider a 22mm lens the effect is as follows -- to be viewed correctly (i.e. with correct perspective) at 10" distance, the print should be 12x8".
I was talking about an example application in microscopes, where in unsophisticated setups people typically slap on an NTSC color camera. And 8 Mpix is all you need if you have a rotating color wheel in front of the CCD, which is the next best thing to having a 3-CCD system. Since you're always subsampling the CCD, the increased sampling rate (say 180 half-fields vs. 60 half-fields) is not a problem with recent enough CCDs. But I was giving just an example in a particular context.
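Roughly, the colour-wheel idea looks like this (a sketch only; the sensor size and helper name are made up for illustration):

    import numpy as np

    def merge_sequential_fields(red, green, blue):
        # Three successive monochrome exposures, taken behind the R, G and B
        # segments of the rotating wheel, are stacked into one colour frame.
        # Reading the CCD at 180 fields/s thus yields 60 colour frames/s.
        return np.stack([red, green, blue], axis=-1)

    h, w = 2448, 3264   # ~8 Mpix monochrome CCD (illustrative size)
    red, green, blue = (np.zeros((h, w), dtype=np.uint8) for _ in range(3))
    frame = merge_sequential_fields(red, green, blue)   # shape (h, w, 3)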
The eye, as opposed to what is claimed, is 5x more capable in resolution than the standard of 0.1mm at 10" (this latter is 250 dpi in the print)
All the numbers are known and there's no such thing as "claimed". People have been cutting up and analyzing retinas for ages now; it's hardly controversial these days. I don't know who claims 250 dpi, but that's just a made-up number. 750 dpi is the correct one.
In the central vision there are about 130 cones per degree of visual angle. At 10 inches from the eye that gives a linear cone pitch of about 35 \mu m, which corresponds to about 750 dpi. But such high-resolution vision doesn't even span an inch! It barely spans half an inch at that distance.
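If you want to reproduce those numbers yourself, the arithmetic is (a minimal sketch):

    import math

    cones_per_degree = 130                 # cone density in central vision
    distance_mm = 254.0                    # 10 inches
    mm_per_degree = 2 * distance_mm * math.tan(math.radians(0.5))
    cone_pitch_mm = mm_per_degree / cones_per_degree
    print(f"cone pitch: {cone_pitch_mm * 1000:.0f} um")        # -> ~34 um
    print(f"equivalent: {25.4 / cone_pitch_mm:.0f} dpi")       # -> ~745 dpi
    print(f"3-degree span: {3 * mm_per_degree / 25.4:.2f} in") # -> ~0.52 in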
Recap: The eye's resolution is highly localized. You have a high-resolution, sampled (say up to 6 samples/s) and quite non-realtime central vision region, which is a cone with about a 3-degree apex angle. Everything else is peripheral vision, which has very low spatial resolution (but very good temporal resolution!) and is used mainly for realtime processing in the visual system -- for the peripheral visual reflexes (motion retargeting, optokinesis), for determining "interesting" points in the scene to move the central vision to, etc.
The implication is that the print should really be at least 750 dpi at 12x8
size -- 54 Mpixels.
Last time I checked there are about 6 million cones (colour-sensitive detectors) total on your average human retina. Of those, a mere 18 thousand are in the rod-free one-degree square of the fovea, and about 200 thousand total in the whole fovea (i.e. the central vision area). Numbers vary slightly depending on who you cite.
Now, if you had 54 million colour-sensitive pixels on your retina, your optic nerve would be too big to fit anywhere. But of course you don't claim that. Assuming (wrongly!) that your eye doesn't move, you'd need 6 million pixels total in order to present imagery to the fullest potential of the retina, period. But the eyes do move, and we do scan the images that we see -- that's where your 54 Mpixel figure comes from, and it's correct. But it hides something, read on.
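To make the bookkeeping explicit (just restating the numbers above as a sketch):

    dpi = 750
    print_pixels = (dpi * 12) * (dpi * 8)   # 12x8" print: 9000 x 6000
    total_cones = 6_000_000                 # cones on the whole retina
    foveal_cones = 200_000                  # cones in the fovea alone
    print(f"print: {print_pixels / 1e6:.0f} Mpix")                 # -> 54 Mpix
    print(f"retina, at any instant: {total_cones / 1e6:.0f} M cones")
    # The print needs 54 Mpix only because the eye scans it; at each
    # fixation the fovea samples just a tiny fraction of those pixels.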
The main thing is that in most applications we don't really need the full resolution of central vision everywhere at once. Even IMAX film doesn't have enough resolution to fully exploit your visual input potential, but OTOH, if you were the only person in the theatre, more than 95% of the film's resolution would be completely and hopelessly wasted at any given time, since your fovea covers only a tiny patch of the screen at each instant.
The whole problem comes from the big discrepancy between central and peripheral vision's resolution. While our eye sees only about 3 degrees of FOV in high resolution, the rest can be very low resolution. But to exploit that, you'd need a very good realtime eye tracker and machinery to generate a high-resolution insert in the low-resolution image that you're seeing. The technology exists to do just that, but it's a major pain.

You need extremely fast data paths and latencies on the order of milliseconds to do everything. I.e. as soon as you detect that the eye has started moving, you need to estimate where it's going and how soon it will get there (saccade dynamics are pretty well known today), and start moving your high-resolution insert in the same direction, following roughly the same dynamics. As the eye moves to its final destination, you have to continually update the hi-res insert's position and dynamics. As soon as the eye stops, the saccadic (central vision relocation) system does an acquisition of the image and ignores any further central vision input for about 150-350ms, depending on visual content and reflex overrides that may happen in the meantime. By the time the post-saccadic acquisition is done, the hi-res insert had better be at the right spot and completely still.
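As a control-loop skeleton it might look like this (purely illustrative; the tracker/display interfaces and all numbers are hypothetical, not a real API):

    import time

    def predict_landing_point(pos, vel, time_to_land_s=0.03):
        # Crude ballistic extrapolation from the saccade's early trajectory.
        # Real saccade dynamics are stereotyped, and better models exist.
        return (pos[0] + vel[0] * time_to_land_s,
                pos[1] + vel[1] * time_to_land_s)

    def gaze_contingent_loop(tracker, display):
        # `tracker` and `display` are hypothetical stand-ins for a realtime
        # eye tracker and a renderer with a movable high-resolution insert.
        while True:
            gaze = tracker.sample()                 # millisecond-latency sample
            if gaze.saccade_started:
                target = predict_landing_point(gaze.position, gaze.velocity)
                display.move_insert(target)         # chase the predicted landing
            elif gaze.fixation_started:
                display.lock_insert(gaze.position)  # must be in place and still
                time.sleep(0.150)                   # post-saccadic suppression:
                                                    # the ~150-350ms latency budget
            display.render(low_res_background=True)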
In the vision world, things aren't too easy :)
But alas, we have departed from the original lens topic :)
Cheers, Kuba