How do I determine a camera projection matrix?

I'm an undergraduate computer-science student doing research in the field of computer vision, and one of the tasks I've been charged with is calibrating the camera on a robot.

I understand the basic principles at work: a point in 3D world coordinates is mapped to homogeneous 2D image coordinates by the pinhole model, and camera calibration is supposed to find the parameters of that transformation. However, I'm a little stumped on the actual application of these ideas.
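For what it's worth, the pinhole model itself is only a few lines of code. This is a minimal sketch with made-up intrinsics (focal lengths `fx`, `fy` and principal point `cx`, `cy` are assumed values, not anything from a real calibration):

```python
# Assumed intrinsics: focal lengths and principal point, in pixels.
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0

def project(X, Y, Z):
    """Project a 3D point (camera frame) to pixel coordinates via the pinhole model."""
    # Perspective divide: homogeneous (X, Y, Z) -> (X/Z, Y/Z, 1), then scale/shift
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

print(project(0.1, 0.0, 1.0))  # a point 1 m ahead, 10 cm to the right -> (400.0, 240.0)
```

The perspective divide by Z is the step that makes the model projective rather than linear, which is why the matrix form uses homogeneous coordinates.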

I'm using the "Camera Calibration Toolbox for Matlab" (http://www.vision.caltech.edu/bouguetj/calib_doc/). I've successfully used the program to analyze a series of images and determined the intrinsic parameters, and I have a set of extrinsic parameters (one for each image I fed into the program); however, I can't figure out how to generate the matrix that transforms the pixel coordinates into real-world coordinates.
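Assembling the forward projection matrix from the quantities you already have is the easy direction: P = K [R | t], where K is the intrinsic matrix and (R, t) are the extrinsics for one image. Here is a sketch with assumed example values (if I remember the Bouguet toolbox's naming, these correspond roughly to `KK`, `Rc_ext`, and `Tc_ext`, but check your own output):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Assumed example values, not real calibration output.
K = [[800.0, 0.0, 320.0],   # intrinsic matrix: fx, fy, cx, cy
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0],       # identity rotation, for simplicity
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.5]         # camera 0.5 m from the world origin

# Stack [R | t] into a 3x4 matrix, then P = K [R | t]
Rt = [row + [ti] for row, ti in zip(R, t)]
P = matmul(K, Rt)

# Project the homogeneous world point Xw = (0, 0, 0.5, 1)
Xw = [0.0, 0.0, 0.5, 1.0]
x = [sum(p * w for p, w in zip(row, Xw)) for row in P]
u, v = x[0] / x[2], x[1] / x[2]
print(u, v)  # this point sits on the optical axis, so it lands at (320.0, 240.0)
```

Note that P maps world points to pixels, not the other way around: P is 3x4 and has no ordinary inverse, which is exactly the obstacle you're running into with pixel-to-world mapping.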

I would greatly appreciate it if someone could point me in the right direction and tell me where I can learn what I need to know.
From a theoretical point of view, I don't see how it is possible. If you are projecting a 3D scene onto a 2D surface, how can you tell where along the missing dimension to place the information you collect? In other words, how do you get depth perception with only one eye?