I Interferometry (testing of optics)

  • Thread starter Doc
  • Start date

Doc

Hi all,

I'm a mechanical engineer who has been dumped into optical metrology at work without anybody much more knowledgeable than myself to help me out. A previous mentor who left recently (who was our optical expert) always told me when measuring wavefront error of optics to "tilt-out the fringes" before taking measurements with the interferometer. I have been doing this for some recent measurements: getting rid of nearly all of the fringes bar maybe one. However, yesterday a colleague thought it would be better to leave some fringes in, maybe five to six. He didn't know why this was a good idea, and neither do I.

I was curious and did two measurements of an elliptical flat. One measurement I did after tilting out nearly all of the fringes. A second measurement I did after adding fringes back in, maybe around ten. The rms wavefront error measured is approximately the same (after the software subtracts off the tilt).

My question is: are more or fewer fringes better for this type of measurement? I have seen measurement reports from vendors who supplied our optics, and the interferograms on those reports show roughly five fringes.

Please help!
 


Ibix

Science Advisor
Insights Author
My sympathies - I've been in the position of being promoted to expert because the actual experts left a couple of times. It's slightly nerve-wracking.

In a manual setup, I was taught to leave in a few fringes because it's easier for humans. For example, if you've got a plane wave propagating down two arms and you want to zero the path difference, then you are looking for the fringe with the strongest contrast. If you have a setup with no tilt, you have to figure out whether the current black screen is blacker than the last black screen, which is difficult for humans. If you look at three or four lines, though, spotting the darkest one is more manageable. And if you have a non-plane wave, you'll find that with a bit of tilt the fringes often curve one way on one side of equal path length and the other way on the other side, and spotting the place where the lines switch from one curve to the other is easier than looking for the largest and most uniformly dark central spot. And when you adjust something, you have more of a chance of accurately counting how many lines went across your visual field than how many times your visual field blinked bright and dark.
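To make the fringe-counting point concrete, here is a toy numpy sketch (my own illustration, not anything from an actual instrument): with N waves of tilt across a 1-D aperture you get roughly N dark fringes you can count, while with zero tilt the whole field just brightens and darkens together and there is nothing to count.

```python
import numpy as np

# Two-beam interference across a 1-D aperture (toy model, equal beam intensities)
x = np.linspace(0.0, 1.0, 1001)  # normalized aperture coordinate

def fringe_count(tilt_waves):
    """Count dark fringes when the reference beam is tilted by `tilt_waves` waves."""
    phase = 2 * np.pi * tilt_waves * x   # optical path difference from tilt, in radians
    intensity = 2 * (1 + np.cos(phase))  # I = I1 + I2 + 2*sqrt(I1*I2)*cos(phase), I1 = I2 = 1
    # Dark fringes show up as local minima of the intensity profile
    minima = (intensity[1:-1] < intensity[:-2]) & (intensity[1:-1] < intensity[2:])
    return int(minima.sum())

print(fringe_count(0.0))  # no tilt: uniform field, nothing to count -> 0
print(fringe_count(5.0))  # 5 waves of tilt across the aperture -> 5 dark fringes
```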

All of that might be why your colleague feels like lines would be better. It might also be why you get interferograms looking like that - so you can eyeball the results yourself.

However, I get the impression you have a computer driven system. I rather suspect that none of these constraints apply to that. Or rather, different constraints apply, and the best approach would depend on how the software analyses the interferograms. I must say my experience with interferometry is mostly in the classroom setting, and we used manual kit to learn on (also, computer controlled kit was out of the university's price range at the time). It might be worth investigating if your instrumentation manufacturer has a training program. Or seeing if you can buy your former mentor a beer one day?
 

Andy Resnick

Science Advisor
Education Advisor
Insights Author
'Tilt' wavefront error does not result in degraded images; this is why the software automatically subtracts the tilt. One reason why it's best to leave some residual tilt is numerical accuracy and precision: most likely, the software decomposes the interferogram into Zernike polynomials. If you manually remove all of the tilt, then noise can substantially contribute to the polynomial decomposition, resulting in a poor fit. Too much tilt will likewise result in increased noise (once the software subtracts the tilt out), so a 'best practice' is to leave some tilt in the optic under test.
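As a rough illustration of the tilt-subtraction step (a minimal sketch of my own, not the instrument's actual algorithm, and with made-up numbers): generate a wavefront over a circular aperture containing deliberate tilt, a small surface-error term, and noise; least-squares fit the piston and tilt terms (the lowest Zernike modes); subtract them; and report the residual RMS, which is the quantity the report would quote.

```python
import numpy as np

# Sample a circular aperture on a grid (illustrative only)
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
mask = x**2 + y**2 <= 1.0
xv, yv = x[mask], y[mask]

rng = np.random.default_rng(0)
tilt_x, tilt_y = 2.5, 0.0            # deliberate tilt: ~5 fringes across the aperture
aberration = 0.05 * (xv**2 - yv**2)  # the "real" surface error we want to measure
noise = 0.01 * rng.standard_normal(xv.size)
w = tilt_x * xv + tilt_y * yv + aberration + noise  # wavefront, in waves

# Least-squares fit of piston + x-tilt + y-tilt (the first Zernike terms)
A = np.column_stack([np.ones_like(xv), xv, yv])
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
residual = w - A @ coeffs  # tilt-subtracted wavefront

print(f"fitted tilt: {coeffs[1]:.3f}, {coeffs[2]:.3f} waves")
print(f"residual RMS: {residual.std():.4f} waves")
```

The fitted tilt recovers the deliberate tilt, and the residual RMS reflects only the aberration plus noise, consistent with the observation that the reported RMS came out about the same with and without fringes tilted out.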

I highly recommend that you get a copy of Malacara's book "Optical Shop Testing"- it's an indispensable reference for this kind of work.

Does that help?
 

Doc

Thanks, that all makes sense. Unfortunately the mentor has left the country for work; I have his email, but I just didn't want to pester him too much regarding stuff like this.

Thanks again for the response!
 

Doc

Yes the software does decompose the interferogram into Zernikes.

Am I, in effect, taking signal out of my data by removing the tilt and consequently making noise more prominent?

Yes! I have that very book on my desk and have been reading through it this week. Like you say, I have found it incredibly helpful, but I couldn't really find an answer to this particular question in my reading.

Thanks for the help!
 

Andy Resnick

Science Advisor
Education Advisor
Insights Author
I think that's a reasonable way to think about the issue, definitely. Glad you find Malacara's book helpful!
 
