Camera Calibration: Estimate Lens Distortion from 1 Photo

  • Thread starter: ProTerran
  • Tags: Calibration, Camera
AI Thread Summary
The discussion revolves around estimating lens distortion from a single photo using a reference pattern of uniquely defined dots. The key challenge is determining the undistorted positions of these dots to compute polynomial coefficients for distortion correction. A method involves detecting the shapes of the dots to calculate geometric factors like centroid, which aids in identifying their positions. The GRIP image processing software, designed for astrophotography, is highlighted as a tool that can assist in this calibration process. The software now includes features for correcting lens distortion based on a grid of calibrated dot positions, allowing users to batch process multiple images. The new version of GRIP simplifies the distortion correction process, making it accessible through user-friendly menus and wizards. Feedback on the software's effectiveness is encouraged.
ProTerran
Hello,

Here is my problem:
I've created a reference pattern for camera calibration. It consists of well-defined dots, each of which is unique.
What I'm trying to do is estimate lens distortion from only one photo. I can easily identify the dots in the photo, but I have no idea how to estimate the positions the dots would have if they were undistorted.
I need the undistorted dot positions in order to compute the differences between the undistorted and distorted positions, which are then used to compute the polynomial coefficients for the correction.

Sorry for my bad English.
 
You need to detect each spot as a shape, in terms of its boundary. Then you can calculate geometrical properties such as width, height, perimeter and, most usefully in your case, the centroid (i.e. the centre of gravity of all the pixels comprising the object).
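For example, the centroid part can be done roughly like this (a minimal Java sketch of the idea only, not code from GRIP; the class and method names are made up): flood-fill the pixels of one detected spot in a binary mask and take the mean of their coordinates.

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch, not GRIP's Blob class: find the connected spot containing
// a seed pixel in a binary mask and return its centroid.
public class BlobCentroid {

    /** Returns {cx, cy} of the spot containing (startX, startY), or null if the seed is background. */
    static double[] centroid(boolean[][] mask, int startX, int startY) {
        if (!mask[startY][startX]) return null;
        int h = mask.length, w = mask[0].length;
        boolean[][] visited = new boolean[h][w];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[] {startX, startY});
        visited[startY][startX] = true;
        long sumX = 0, sumY = 0, count = 0;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            sumX += x;
            sumY += y;
            count++;
            // Visit the 4-connected neighbours that belong to the same spot.
            int[][] nbrs = {{x + 1, y}, {x - 1, y}, {x, y + 1}, {x, y - 1}};
            for (int[] n : nbrs) {
                int nx = n[0], ny = n[1];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && mask[ny][nx] && !visited[ny][nx]) {
                    visited[ny][nx] = true;
                    stack.push(new int[] {nx, ny});
                }
            }
        }
        return new double[] {(double) sumX / count, (double) sumY / count};
    }
}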
I do this kind of thing in my own image processor (GRIP), which I wrote for making astrophotographs. The problem I faced there was that I only had a fixed tripod when I started, so the stars moved from one frame to the next due to the Earth's rotation. So for stacking multiple exposures I had to take account of lens distortion. I have made GRIP available for others to use from my own web site: www.grelf.net. It is written in Java, so anyone can extend it for their own purposes. The API is available on the web too: use the API button on the menu on each of my pages. In particular, look at the class called Blob. A blob is a detected object, described in terms of its boundary and enclosed region.
 
I am currently modifying GRIP so that correcting the lens distortion of a photo will be possible as a menu option, after a reference pattern of the kind I think you mean has been used to calibrate it. A programmer could do it with my API now but I'll make it so a non-programmer can do it.
 
Thank you for your reply.
Can you tell me what kind of method you are going to use for camera distortion calibration?

I came across two kinds of methods. One is based on modelling radial distortion and the other on fitting a user-defined polynomial (a much rarer method). The second method is more general because you can fit any type of function, so you are not limited to radial distortion and can model any kind of image distortion.
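To be concrete about the first kind, the radial model I mean scales each point about the principal point by a polynomial in the squared radius. A small illustrative Java sketch (the coefficients k1, k2 and the principal point (cx, cy) are assumed inputs, not values from any particular library):

// Sketch of a two-coefficient radial distortion model:
// a distorted point is the undistorted point scaled by (1 + k1*r^2 + k2*r^4)
// about the principal point (cx, cy).
public class RadialModel {

    static double[] distort(double xu, double yu, double cx, double cy, double k1, double k2) {
        double dx = xu - cx, dy = yu - cy;
        double r2 = dx * dx + dy * dy;               // squared distance from the principal point
        double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
        return new double[] {cx + dx * scale, cy + dy * scale};
    }
}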

As I mentioned in my previous post, the problem I have is how to estimate the difference between the positions of the reference points in the undistorted and distorted images (I only have one image, the distorted one). One way to solve this is to model a pinhole camera, knowing the exact position of the camera with respect to the reference pattern as well as the intrinsic camera parameters, i.e. zoom and focus.
When you have all this information it is possible to project the reference points and obtain the undistorted positions.

But I don't have that information, so I am trying a different approach. Assuming the only information I have is the distorted image and the exact positions of the points on the reference pattern, I iteratively search for the best fit of the undistorted points to the distorted image. After that, I will be able to use the least-squares method to fit a polynomial and obtain a map of the distortion over the whole image.
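For the least-squares step I have in mind something like the sketch below (illustrative Java only, all names made up): build the design matrix of a second-order polynomial in (x, y) at the distorted dot positions and solve the normal equations for the coefficients that best reproduce the measured displacements. The same fit would be done once for the x-displacements and once for the y-displacements.

import java.util.Arrays;

// Sketch: least-squares fit of a second-order 2D polynomial to displacements.
public class PolyFit {

    // Basis terms of a second-order polynomial in (x, y): 1, x, y, x^2, x*y, y^2.
    static double[] basis(double x, double y) {
        return new double[] {1.0, x, y, x * x, x * y, y * y};
    }

    // Fit coefficients c so that sum_k c[k]*basis(x, y)[k] approximates d,
    // using the normal equations (A^T A) c = A^T d.
    static double[] fit(double[] xs, double[] ys, double[] d) {
        int n = xs.length, m = 6;
        double[][] ata = new double[m][m];
        double[] atd = new double[m];
        for (int k = 0; k < n; k++) {
            double[] b = basis(xs[k], ys[k]);
            for (int i = 0; i < m; i++) {
                atd[i] += b[i] * d[k];
                for (int j = 0; j < m; j++) ata[i][j] += b[i] * b[j];
            }
        }
        return solve(ata, atd);
    }

    // Plain Gaussian elimination with partial pivoting; fine for a 6x6 system.
    static double[] solve(double[][] a, double[] rhs) {
        int m = rhs.length;
        for (int col = 0; col < m; col++) {
            int piv = col;
            for (int r = col + 1; r < m; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
            double[] tmpRow = a[col]; a[col] = a[piv]; a[piv] = tmpRow;
            double tmp = rhs[col]; rhs[col] = rhs[piv]; rhs[piv] = tmp;
            for (int r = col + 1; r < m; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c < m; c++) a[r][c] -= f * a[col][c];
                rhs[r] -= f * rhs[col];
            }
        }
        double[] x = new double[m];
        for (int r = m - 1; r >= 0; r--) {
            double s = rhs[r];
            for (int c = r + 1; c < m; c++) s -= a[r][c] * x[c];
            x[r] = s / a[r][r];
        }
        return x;
    }

    public static void main(String[] args) {
        // Distorted dot positions and their x-displacements to the reference positions (made-up numbers).
        double[] xs = {10, 200, 400, 10, 200, 400, 10, 200, 400};
        double[] ys = {10, 10, 10, 200, 200, 200, 400, 400, 400};
        double[] dx = {0.5, 0.1, 0.6, 0.2, 0.0, 0.3, 0.7, 0.2, 0.8};
        System.out.println(Arrays.toString(fit(xs, ys, dx)));
    }
}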

Hope that what I wrote is clear enough :-)
 
I have nearly finished a new version which does the following. Use an image of a square array of dots, taken with the same optical set-up as you wish to correct. GRIP measures the positions of the dots and creates a regular grid with the same average spacing and orientation. It then knows what second-order polynomials to use on a real photo to warp it so that the calibrated dot positions move to the regular grid. The grid info is saved in a file so that it can be reapplied to multiple images; in fact it will be possible to batch process a sequence of photos.
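To illustrate the regular-grid part (this is only a sketch of the idea, not GRIP's actual code, and the names are invented): given the measured average spacing and orientation, the ideal dot positions that the measured centroids should map to can be generated like this.

// Sketch only: build the ideal, regular grid of dot positions from an
// average spacing and orientation, so each measured centroid can be paired
// with its undistorted target position.
public class ReferenceGrid {

    /** nCols x nRows grid with spacing s, rotated by angle (radians), starting at (x0, y0). */
    static double[][] build(int nCols, int nRows, double s, double angle, double x0, double y0) {
        double cos = Math.cos(angle), sin = Math.sin(angle);
        double[][] pts = new double[nCols * nRows][2];
        int k = 0;
        for (int r = 0; r < nRows; r++) {
            for (int c = 0; c < nCols; c++) {
                double gx = c * s, gy = r * s;           // unrotated grid coordinates
                pts[k][0] = x0 + gx * cos - gy * sin;    // rotate and translate
                pts[k][1] = y0 + gx * sin + gy * cos;
                k++;
            }
        }
        return pts;
    }
}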
This warping is the same as I use for stacking multiple exposures in astrophotography, where GRIP matches star patterns from one frame to the next. It has been available for programmers for several years but I have now made it easily accessible from menus, with wizards to guide you through. I plan to upload the new version of GRIP sometime this week.
 
That is great. Thanks sir.
 
A new version of my GRIP application is now available, including the kind of distortion correction I believe you want. Read my initial description here carefully: http://www.grelf.net/new.html and then I hope you will be able to download and use it.
Please give me feedback on whether it does what you need and what may need improving.
 