
Rotating Lens for optimal viewing

  1. Dec 5, 2004 #1
    Rotating microscope lens

    I have a question regarding spinning clear lenses on microscopes.

    If you spin a lens at a great velocity, will it somehow refract the light/electrons that come up from below, allowing for a better image of, say, an object on a slide? In microbiology we have to change lenses to see more than the previous lens allows, because of size and the amount of light that refracts back up through the lens. Sincerely, Dymium
  3. Dec 8, 2004 #2
    You change lenses (NOT lens's) because their focal length and magnification are bound together in an inverse relationship. It's very hard to design a microscope objective lens so that the focal length could be variable. Instead, they have fixed focal length, thus fixed magnification, and if you want to see things bigger (zoom in), you have to switch lenses.
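    The fixed-focal-length-means-fixed-magnification point can be sketched numerically. A minimal Python sketch, assuming the classic 160 mm finite tube length (modern infinity-corrected objectives use a tube lens instead, and these focal lengths are just typical illustrative values):

```python
# Magnification of a finite-conjugate objective is roughly
# tube length / focal length, so a fixed focal length means a fixed
# magnification. 160 mm is the assumed classic tube length.
TUBE_LENGTH_MM = 160.0

def objective_magnification(focal_length_mm):
    """Approximate magnification of a fixed-focal-length objective."""
    return TUBE_LENGTH_MM / focal_length_mm

for f in (16.0, 4.0, 1.6):  # focal lengths of typical 10x/40x/100x objectives
    print(f"f = {f:5.1f} mm  ->  about {objective_magnification(f):.0f}x")
```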

    Now if you had a lens made out of plastic or somesuch and you'd spin it very fast, it would deform due to centrifugal force. I doubt this would be very useful in a typical microscope, as the lenses are so tiny that in order to deform them in a useful way you'd need to spin them at several tens of thousands of RPM, or even several hundred thousand RPM. It's not exactly trivial :) And since the lenses would need to maintain a "reasonable" shape throughout deformation, it's likely they'd need to be made from a material that has non-homogeneous stiffness in at least one direction in the polar coordinate system (say axial), and that's kinda hard to do while preserving optical quality. No doubt such things could be numerically modeled and a workable design obtained, but I just don't see it on everyone's desks just yet :)

    Where did you get the idea from in the first place, BTW?
  4. Dec 8, 2004 #3
    I got the idea from an old quantum mechanics book I borrowed from a dentist in the Navy in 1991, and also while studying MMPT manuals (Microbiologically Monitored __ __; it's been a long time and I can't remember the rest of its meaning). Anyway, I was reading in the quantum book about mirrors and glass and why we see our reflection. Later, while looking into the microscope, I began to wonder how I could observe greater detail and reduce the amount of reflected light/electrons coming back up through the lens without changing lenses. Hmm, I wonder if a two-way mirror can be made into a lens that would also reflect light?
    Last edited: Dec 8, 2004
  5. Dec 8, 2004 #4
    In the mid 1960s I thought of a liquid-filled lens whose focal point would be changed by increasing the internal fluid pressure within a flexible chamber. A patent search revealed that a similar concept had been patented in 1901. It cost me almost two weeks' income to find that out.

    I still believe a varying pressure liquid filled lens might be useful but the design problems would be enormous to get a distortion free result. It seems to me it would require several diaphragms within the lens structure, each laser ablated to obtain proper flex. Each diaphragm would compensate for the other’s distortion.

    My thought back then was to alter reading glasses to be driving glasses or magnifying glasses. The earpiece would be a liquid-filled tube (connected to the lenses) with a tiny thumbscrew at the end acting as a piston to change the pressure.
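    The pressure idea amounts to changing the radii of curvature of the chamber walls. A rough sketch via the thin-lens lensmaker's equation, with purely illustrative numbers (water fill, symmetric biconvex chamber) and ignoring the membrane-distortion problems mentioned above:

```python
# Sketch of how a pressure-driven change in surface curvature shifts
# focal length, via the thin-lens lensmaker's equation:
#   1/f = (n - 1) * (1/R1 - 1/R2)
# Values are illustrative assumptions, not from any real design.
N_WATER = 1.333  # refractive index of the fill liquid (assumed water)

def focal_length_mm(r1_mm, r2_mm, n=N_WATER):
    """Thin-lens lensmaker's equation; radii use the usual sign convention."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# A symmetric biconvex chamber: raising the pressure bulges both membranes,
# shrinking the radii of curvature and shortening the focal length.
for r in (100.0, 80.0, 60.0):
    print(f"R = {r:5.1f} mm  ->  f = {focal_length_mm(r, -r):6.1f} mm")
```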

  6. Dec 9, 2004 #5
    The current technology (in development) uses liquid lenses with two regions, oil and water, controlled by electrostatic means that determine the shape of their interface, and hence the focal length.
    Spinning is certainly one means, but I think there are better ones. Also in development.
  7. Dec 14, 2004 #6
    Since nowadays it's pretty trivial to obtain very nice lenses using digitally synthesized holograms, I'd guess it's simpler to have a flat glass disk with e.g. 20 lenses, each with slightly different focal length, simply exposed on a hologram, rather than having an actual lens that changes shape.

    Besides, for many applications it's actually way cheaper to have a 100% fixed focal length and just vary the digital zoom based on a CCD that has way too many pixels. E.g. if you have your typical microscope+video camera situation, by using say an 8 megapixel CCD and having NTSC output you can have a 5x zoom without losing any quality. And every year they seem to add half a megapixel or so :)
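    A quick back-of-envelope check of that 5x figure, assuming a typical 3264x2448 layout for an 8 megapixel CCD and a square-pixel 640x480 NTSC frame (both layouts are assumptions for illustration):

```python
# Crop an ever smaller window from the sensor and scale it to NTSC
# resolution; zoom stays lossless until the crop drops below 640x480.
SENSOR_W, SENSOR_H = 3264, 2448   # assumed 8 Mpix sensor layout
NTSC_W, NTSC_H = 640, 480         # square-pixel NTSC frame

max_zoom = min(SENSOR_W / NTSC_W, SENSOR_H / NTSC_H)
print(f"Lossless digital zoom: about {max_zoom:.1f}x")
```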

    So if you were to actually sell a microscope with a video camera and a 5x optical zoom, it'd be most likely cheaper not to have any optical zoom and just use a better CCD.

  8. Dec 14, 2004 #7

    Since when did the sun go monochromatic?
    8 Mpix is not enough. A 55 mm standard lens is useless for capturing scenes anything like what the eye sees, because of the eye's peripheral vision. If you consider a 22 mm lens, the effect is as follows: to be viewed correctly (i.e. with correct perspective) at a 10" distance, the print should be 12x8".
    Even at this distance and size, the eye-to-eye separation of ~3" affects perspective, and strictly the print should be larger, say 24x16" viewed at 20".
    The eye, as opposed to what is claimed, is 5x more capable in resolution than the standard of 0.1 mm at 10" (the latter is 250 dpi in the print). Your standard monitor is 100 dpi seen at 20", which is nearly the same, and the jagged lines are quite clear (this is called vernier acuity, where the eye discerns a discontinuity in a line).
    The implication is that the print should really be at least 750 dpi at 12x8" size, i.e. 54 Mpixels. So even 'professional cameras' do not come close yet.
    In the ideal sense, telephoto shots should be viewed at correspondingly longer distances for correct perspective, or the photo printed at a smaller scale: e.g. a 100 mm lens shot should be printed at 3x2" for correct perspective viewed at 10", yet instead we print at 6x4".
    So my contention is that at that viewing distance (10") we need at least 750 dpi, or a total of 13.5 Mpix actually printed; if you use 2x digital zoom to achieve this, that is 1/4 of the frame, for a total of 54 Mpix.
    These things can be checked using a good drawing program such as CorelDRAW or an Adobe one with native resolution better than 300 dpi, then printing on a good printer with photo-quality paper. A good test picture is simply a set of lines fanning out from one corner at 5-degree intervals; on your monitor it will be clear that some lines are particularly bad (100 dpi at 20", or 200 dpi at 10").
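    The pixel arithmetic above can be checked directly (750 dpi and the print sizes are the figures quoted in the post):

```python
# The 750 dpi print arithmetic quoted above.
dpi = 750

full_print = (dpi * 12) * (dpi * 8)   # 12x8 inch print
print(f"12x8 in at {dpi} dpi: {full_print / 1e6:.1f} Mpix")

small_print = (dpi * 6) * (dpi * 4)   # 6x4 inch print, 1/4 of the frame area
print(f"6x4 in at {dpi} dpi: {small_print / 1e6:.1f} Mpix")

# 2x digital zoom crops 1/4 of the sensor area, so the full sensor
# needs 4x the printed pixel count:
print(f"Sensor needed for 2x zoom: {4 * small_print / 1e6:.1f} Mpix")
```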
  9. Dec 21, 2004 #8
    Peripheral vision is quite useless in what you believe to be "seeing". In the visual process when viewing static images, it's mainly used to decide where to re-target the high-resolution central vision. The imagery from the periphery is very low-res (orders of magnitude lower res than central vision) and to the best of current knowledge is not directly composited into the image that we see.

    To a good first-order approximation, we see through a 3 degree keyhole. For a rectilinear lens, the field of view for a 35 mm frame size is given by
    [tex]\gamma = 2\arctan\left(\frac{0.035}{2f}\right)[/tex], with f in metres. So 3 degrees is like looking through a 670mm telephoto on a 35mm camera. That's where all the high-resolution imagery comes from. Everything else is a blur used to decide where to allocate that high-resolution imager next. And that high-resolution imager is a sampled one, and can take maybe 6 samples per second.
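    As a sanity check on that formula, a short Python computation for the 670 mm example (0.035 m is the frame size used in the post):

```python
import math

# Field of view of a rectilinear lens: gamma = 2*arctan(frame / (2*f)).
def fov_degrees(focal_length_m, frame_m=0.035):
    return math.degrees(2 * math.atan(frame_m / (2 * focal_length_m)))

print(f"670 mm lens: {fov_degrees(0.670):.1f} degree field of view")
```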

    Our brain does all the compositing needed to make us perceive a "continuous" visual reality. It's nowhere near continuous!

    I was talking about an example application in microscopes, where in unsophisticated setups people typically slap on an NTSC color camera. And 8 Mpix is all you need if you have a rotating color wheel in front of the CCD, which is the next best thing to a 3-CCD system. Since you're always subsampling the CCD, the increased sampling rate (say 180 half-fields vs. 60 half-fields) is not a problem with recent enough CCDs. But I was just giving an example in a particular context.

    All the numbers are known and there's no such thing as "claimed". People have been cutting up and analyzing retinas for ages now; it's hardly controversial these days. I don't know who claims 250 dpi, but that's just a made-up number. 750 dpi is the correct one.

    In the central vision there are about 130 cones per degree of visual angle. At 10 inches from the eye that gives a linear cone size of about [tex]35\mu m[/tex]. That's about the 750 dpi you're talking about. But such high-resolution vision doesn't even span an inch! It barely spans half an inch at that distance.
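    That cone-size arithmetic can be verified in a couple of lines (130 cones/degree and the 10 inch viewing distance are the figures quoted in the post):

```python
import math

# Linear size subtended by one cone (1/130 of a degree) at 10 inches.
cones_per_degree = 130
distance_mm = 254.0  # 10 inches

cone_size_mm = distance_mm * math.tan(math.radians(1.0 / cones_per_degree))
print(f"Linear cone size at 10 in: {cone_size_mm * 1000:.0f} micrometres")
print(f"Equivalent print resolution: {25.4 / cone_size_mm:.0f} dpi")
```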

    Recap: the eye's resolution is highly localized. You have a high-resolution, sampled (say up to 6 samples/s) and quite non-realtime central vision region, a cone with about a 3 degree apex angle. Everything else is peripheral vision, which has very low spatial resolution (but very good temporal resolution!) and is used mainly for realtime processing in the visual system: for the peripheral visual reflexes (motion retargeting, optokinesis), for determination of "interesting" points in the scene to move the central vision to, etc.

    Last time I checked there are about 6 million cones (colour-sensitive detectors) in total on your average human retina. Of those, a mere 18 thousand are in the rod-free one-degree square of the fovea, and about 200 thousand total in the whole fovea (i.e. the central vision area). Numbers vary slightly depending on whom you cite.

    Now, if you had 54 million colour-sensitive pixels on your retina, your optic nerve would be too big to fit anywhere. But of course you don't claim that. Assuming (wrongly!) that your eye doesn't move, you'd need 6 million pixels total in order to present imagery to the fullest potential of the retina, period. But the eyes do move, and we do scan the images that we see and that's where your 54 mpixel figure comes from, and it's correct. But it hides something, read on.

    The main thing is that in most applications we don't really need the full resolution of central vision. Even an IMAX film doesn't have enough resolution to fully exploit your visual potential, but OTOH if you were the only person in the theatre, more than 95% of the film's resolution would be completely and hopelessly wasted at any given time.

    The whole problem comes from the big discrepancy between the resolutions of central and peripheral vision. While our eye only sees about 3 degrees FOV in high resolution, the rest can be very low resolution. But in order to exploit that, you'd need a very good realtime eye tracker and machinery to generate such a high-resolution insert in the low-resolution image that you're seeing. Technology exists to do just that, but it's a major pain. You need extremely fast data paths and latencies on the order of milliseconds to do everything. I.e. as soon as you detect that the eye has started moving, you need to estimate where it's going and how soon it will get there (saccade dynamics are pretty well known today), and start moving your high-resolution insert in the same direction, following roughly the same dynamics. As the eye moves to its final destination, you have to continually update the hi-res insert's position and dynamics. As soon as the eye stops, the saccadic (central vision relocation) system does an acquisition of the image and ignores any further central vision input for about 150-350ms, depending on visual content and reflex overrides that may happen in the meantime. By the time the post-saccadic acquisition is done, the hi-res insert had better be at the right spot and completely still.
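    To put a number on how little of a big screen the fovea covers at any instant, here is a rough estimate assuming an IMAX-like seat where the screen spans roughly 70 by 40 degrees (those angles are an assumption, not from this thread):

```python
# Fraction of a large screen outside the 3-degree fovea at any instant.
# Screen angular size is an assumed IMAX-like value; a square,
# small-angle approximation is used for both areas.
screen_w_deg, screen_h_deg = 70.0, 40.0  # assumed angular extent of screen
fovea_deg = 3.0

fovea_area = fovea_deg ** 2
screen_area = screen_w_deg * screen_h_deg
wasted = 1.0 - fovea_area / screen_area
print(f"Fraction of screen outside the fovea: {wasted:.1%}")
```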

    In vision world, things aren't too easy :)

    But alas, we have departed from the original lens topic :)

    Cheers, Kuba
  10. Dec 21, 2004 #9
    To Kuba
    Thanks for that rather complete answer. You are right of course, but we have not got visually adaptive monitors in the consumer domain yet, and it is precisely the eye scanning which gives us the feeling of being inside the picture, hence HDTV.
    My complaint is that picture composition at 250 dpi at 20" is NOT good, and that camera adverts for even 8 Mpix do not address the problem of monitor viewing (compared to print).
    On topic
    I did give an example of what is being done with variable focal length lenses using electrostatic control; this is at least partly for cell phone cameras. But in microscopy one may have the luxury of using monochromatic light and doing other things.
    However, I was under the impression that good microscopes are limited only by diffraction, and that zoom aspects are not limited to the objective, so I was not sure about the original question; I just dumped on your 8 Mpix.
    But I will say it again: a 22 mm lens (35 mm equivalent) picture, reproduced full screen on a 17" monitor, can only look correct at 10", and it makes a huge difference.
    But nobody is going to view a monitor at 10", AND the eye separation is a problem anyhow, so I want a 34" screen (diagonal) viewed at a nice comfortable 20-24" with a resolution of 750 dpi, and a camera to match. Dream on.