Interferometry (testing of optics)

Summary
The discussion centers on the optimal number of fringes to retain when measuring wavefront error in optical metrology using an interferometer. It was noted that leaving some fringes can enhance numerical accuracy and precision, as excessive removal may introduce noise into the polynomial decomposition of the data. The software used for analysis typically decomposes the interferogram into Zernike polynomials, and maintaining a slight tilt can improve the fit. Manual setups benefit from a few fringes for easier visual identification, while computer-driven systems may have different requirements. Overall, balancing fringe retention is crucial for accurate measurements and minimizing noise.
Doc
Hi all,

I'm a mechanical engineer who has been dumped into optical metrology at work without anybody much more knowledgeable than myself to help me out. A previous mentor who left recently (who was our optical expert) always told me when measuring wavefront error of optics to "tilt-out the fringes" before taking measurements with the interferometer. I have been doing this for some recent measurements: getting rid of nearly all of the fringes bar maybe one. However, yesterday a colleague thought it would be better to leave some fringes in, maybe five to six. He didn't know why this was a good idea, and neither do I.

I was curious and did two measurements of an elliptical flat. One measurement I did after tilting out nearly all of the fringes; a second I did after adding fringes back in, maybe around ten. The measured RMS wavefront error is approximately the same in both cases (after the software subtracts off the tilt).

My question is: are more or fewer fringes better for this type of measurement? I have seen measurement reports from vendors who sent us optics, and the interferograms on those reports show roughly five fringes.

Please help!
 

Attachments: interferograms.png, layout.png
Ibix

My sympathies - I've been in the position of being promoted to expert because the actual experts left, a couple of times. It's slightly nerve-wracking.

In a manual setup, I was taught to leave in a few fringes because it's easier for humans. For example, if you've got a plane wave propagating down two arms and you want to zero the path difference, you are looking for the fringe with the strongest contrast. If you have a setup with no tilt, you have to figure out whether the current black screen is blacker than the last black screen, which is difficult for humans. If you look at three or four lines, though, spotting the darkest one is more manageable. And if you have a non-plane wave, you'll find that with a bit of tilt the fringes often curve one way on one side of equal path length and the other way on the other side, and spotting the place where the lines switch from one curve to the other is easier than looking for the largest and most uniformly dark central spot. And when you adjust something, you have more of a chance of accurately counting how many lines went across your visual field than how many times your visual field blinked bright and dark.

All of that might be why your colleague feels like lines would be better. It might also be why you get interferograms looking like that - so you can eyeball the results yourself.

However, I get the impression you have a computer-driven system. I rather suspect that none of these constraints apply to that. Or rather, different constraints apply, and the best approach would depend on how the software analyses the interferograms. I must say my experience with interferometry is mostly from the classroom, where we used manual kit to learn on (computer-controlled kit was out of the university's price range at the time). It might be worth investigating whether your instrumentation manufacturer has a training program. Or seeing if you can buy your former mentor a beer one day?
 
Andy Resnick

Doc said:
My question is: are more or fewer fringes better for this type of measurement?

'Tilt' wavefront error does not degrade images; this is why the software automatically subtracts it. One reason it's best to leave some residual tilt is numerical accuracy and precision: most likely, the software decomposes the interferogram into Zernike polynomials. If you manually remove all of the tilt, then noise can contribute substantially to the polynomial decomposition, resulting in a poor fit. Too much tilt will likewise result in increased noise (once the software subtracts the tilt out), so a good practice is to leave a few fringes of tilt across the optic under test.
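To make the tilt-subtraction step concrete, here is a minimal sketch (not any vendor's actual algorithm) of fitting and removing the piston and tilt terms from a wavefront map by ordinary least squares, in Python with numpy. The function name `remove_piston_tilt` and the synthetic wavefront are made up for illustration; units are waves:

```python
import numpy as np

def remove_piston_tilt(wavefront):
    """Fit and subtract piston and tilt (the lowest Zernike terms) from a
    2-D wavefront map by least squares, then report the residual RMS --
    a simplified stand-in for what interferometer software does."""
    ny, nx = wavefront.shape
    # Normalised pupil coordinates in [-1, 1]
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    valid = np.isfinite(wavefront)          # NaN marks pixels outside the aperture
    # Design matrix columns: piston, x-tilt, y-tilt
    A = np.column_stack([np.ones(valid.sum()), x[valid], y[valid]])
    coeffs, *_ = np.linalg.lstsq(A, wavefront[valid], rcond=None)
    residual = wavefront.copy()
    residual[valid] -= A @ coeffs           # subtract the fitted piston + tilt
    rms = np.sqrt(np.mean(residual[valid] ** 2))
    return residual, coeffs, rms

# Synthetic example: a small astigmatism-like error, ~6 fringes of tilt,
# and a little measurement noise, over a circular aperture.
ny = nx = 256
y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
wavefront = 0.05 * (x**2 - y**2) + 3.0 * x + 0.005 * np.random.randn(ny, nx)
wavefront[x**2 + y**2 > 1] = np.nan

residual, coeffs, rms = remove_piston_tilt(wavefront)
print(f"fitted [piston, x-tilt, y-tilt] = {coeffs}")
print(f"residual RMS wavefront error  = {rms:.4f} waves")
```

The point is just that the tilt is fitted and removed numerically rather than nulled by hand, so a few fringes of deliberate tilt do not show up in the reported figure.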

I highly recommend that you get a copy of Malacara's book "Optical Shop Testing": it's an indispensable reference for this kind of work.

Does that help?
 
Doc

Ibix said:
In a manual setup, I was taught to leave in a few fringes because it's easier for humans.

Thanks, that all makes sense. Unfortunately the mentor has left the country for work; I have his email, but I just didn't want to pester him too much with stuff like this.

Thanks again for the response!
 
Doc

Andy Resnick said:
If you manually remove all of the tilt, then noise can contribute substantially to the polynomial decomposition, resulting in a poor fit.

Yes the software does decompose the interferogram into Zernikes.

Am I, in effect, taking signal out of my data by removing the tilt and consequently making noise more prominent?

Andy Resnick said:
I highly recommend that you get a copy of Malacara's book "Optical Shop Testing".

Yes! I have that very book on my desk and have been reading through it this week. Like you say, I have found it incredibly helpful, but I couldn't really find an answer to this particular question in my reading.

Thanks for the help!
 
Andy Resnick

Doc said:
Am I, in effect, taking signal out of my data by removing the tilt and consequently making noise more prominent?

I think that's a reasonable way to think about the issue, definitely. Glad you find Malacara's book helpful!
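One way to see this quantitatively: with an orthonormal (Noll-normalised) Zernike basis, each coefficient's square is that term's contribution to the wavefront variance, so the tilt subtraction removes exactly the tilt terms' share of the variance and nothing else. A tiny sketch with made-up coefficient values:

```python
import numpy as np

# Hypothetical Noll-normalised Zernike coefficients, in waves:
# index 0 = piston, 1 = x-tilt, 2 = y-tilt, 3+ = higher-order terms.
coeffs = np.array([0.80, 0.30, -0.25, 0.05, 0.04, 0.02])

# Orthonormal terms add in quadrature. Piston only shifts the mean,
# so it never contributes to the RMS about the mean.
rms_with_tilt = np.sqrt(np.sum(coeffs[1:] ** 2))   # tilt counted as "error"
rms_reported  = np.sqrt(np.sum(coeffs[3:] ** 2))   # after tilt subtraction

print(f"RMS including tilt:            {rms_with_tilt:.3f} waves")
print(f"RMS reported (no piston/tilt): {rms_reported:.3f} waves")
```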
 
