How do I determine a camera projection matrix?

  • #1
I'm an undergraduate computer-science student doing research in the field of computer vision, and one of the tasks I've been charged with is calibrating the camera on a robot.

I understand the basic principles at work: a point in 3D world coordinates is projected into 2D homogeneous image coordinates through the pinhole model, and camera calibration is supposed to recover the parameters of that transformation. However, I'm a little stumped on the actual application of these ideas.

I'm using the "Camera Calibration Toolbox for Matlab" (http://www.vision.caltech.edu/bouguetj/calib_doc/). I've successfully used the program to analyze a series of images and determined the intrinsic parameters, and I have a set of extrinsic parameters (one for each image I fed into the program); however, I can't figure out how to generate the matrix that transforms the pixel coordinates into real-world coordinates.
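For what it's worth, the matrix that relates world and pixel coordinates is usually assembled from exactly the outputs you have: the intrinsic matrix K and, for each image, the extrinsic rotation R and translation t, giving the 3x4 projection matrix P = K [R | t]. Here is a minimal sketch in Python/NumPy; the focal length, principal point, and extrinsics below are made-up placeholder values, where the toolbox's actual fc, cc, Rc_ext, and Tc_ext for one image would go:

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal length 800 px, principal point
# at 320, 240 -- placeholders for the toolbox's fc and cc outputs).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics for one image (placeholders for the toolbox's
# per-image Rc_ext and Tc_ext): camera at the world origin, axes aligned.
R = np.eye(3)
t = np.array([[0.0], [0.0], [0.0]])

# The 3x4 projection matrix: P = K [R | t]
P = K @ np.hstack([R, t])

# Project a world point (in homogeneous coordinates) into the image.
X = np.array([0.1, 0.2, 2.0, 1.0])   # a point 2 units in front of the camera
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]      # divide out the homogeneous scale
print(u, v)                          # pixel coordinates of the projection
```

Note this P maps world coordinates to pixels; going the other way is not invertible in general, since P is 3x4 and the depth is lost.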

If someone could point me in the right direction and tell me where I can learn what I need to know, I would greatly appreciate it.
 

Answers and Replies

  • #2
From a theoretical point of view, I don't see how it is possible. If you are projecting a 3D image onto a 2D surface, how can you tell where along the missing dimension to place the information you collect? In other words, how do you get depth perception with only one eye?
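Right — a single pixel only determines a viewing ray, not a point. The standard workaround is an extra constraint: for example, assume the point lies on a known plane such as the calibration target's Z = 0 plane, and intersect the ray with it. A sketch of that idea, again with hypothetical calibration values (a camera looking down the world Z axis from 2 units away):

```python
import numpy as np

# Hypothetical calibration values (not from any real dataset).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])   # world origin sits 2 units in front of the camera

def pixel_to_plane(u, v, plane_z=0.0):
    """Back-project pixel (u, v) onto the world plane Z = plane_z."""
    # Direction of the viewing ray in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera center and ray direction in world coordinates
    # (from X_cam = R @ X_world + t, the center is C = -R.T @ t).
    C = -R.T @ t
    d = R.T @ d_cam
    # Solve C[2] + s * d[2] = plane_z for the ray parameter s.
    s = (plane_z - C[2]) / d[2]
    return C + s * d

# The principal point back-projects to the world origin here.
print(pixel_to_plane(320.0, 240.0))
```

Without such a constraint (a known plane, a second camera, or a depth sensor), the missing dimension really is unrecoverable, as you say.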
 
