Hi, I'm attempting to determine the approximate pixel coordinates of cataloged stars within the field of view of a spacecraft camera. I'm given the right ascension and declination of the center of the camera's field of view, along with its dimensions, and I've used that information to filter a star catalog to determine which stars should appear in the image. I'm having difficulty converting the right ascension and declination of those predicted stars into pixel coordinates.

I'm using a gnomonic projection to go from celestial coordinates to tangent-plane coordinates, as described in section 4.3 of this link: http://ugastro.berkeley.edu/infrared10/astrometry/lab3-v3.pdf

X = (cos(dec)*sin(ra - ra0)) / (sin(dec)*sin(dec0) + cos(dec)*cos(dec0)*cos(ra - ra0))
Y = (sin(dec)*cos(dec0) - cos(dec)*sin(dec0)*cos(ra - ra0)) / (sin(dec)*sin(dec0) + cos(dec)*cos(dec0)*cos(ra - ra0))

where X and Y are tangent-plane coordinates, ra0 and dec0 are the right ascension and declination of the center of the camera's field of view, and ra and dec are the right ascension and declination of the star. I then apply the following transformation from tangent-plane to pixel coordinates:

x = f * (X/p) + x0
y = f * (Y/p) + y0

where x and y are pixel coordinates, f is the camera focal length, p is the pixel pitch (assuming square pixels), and (x0, y0) is the location of the center pixel. In my case, f is 2619 mm, p is 0.52 mm, and (x0, y0) is (128, 128). The detector is 256x256 pixels.

When I calculate the pixel coordinates, I'm getting results on the order of 10^3, which clearly does not make sense for a 256x256 detector. Any advice would be much appreciated.
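For concreteness, here is a minimal sketch of the conversion I'm describing (the function and variable names are my own; it assumes all angles are already in radians, which I suspect may be where my error lies):

```python
import numpy as np

def radec_to_pixel(ra, dec, ra0, dec0, f_mm=2619.0, p_mm=0.52, x0=128.0, y0=128.0):
    """Gnomonic (tangent-plane) projection of a star at (ra, dec) about the
    boresight (ra0, dec0), then scaling to pixel coordinates.

    All angles must be in radians; f_mm is the focal length and p_mm the
    pixel pitch, so f_mm / p_mm is the plate scale in pixels per radian.
    """
    # Common denominator of the gnomonic projection
    denom = np.sin(dec) * np.sin(dec0) + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0)

    # Tangent-plane coordinates (dimensionless, ~radians near the center)
    X = np.cos(dec) * np.sin(ra - ra0) / denom
    Y = (np.sin(dec) * np.cos(dec0) - np.cos(dec) * np.sin(dec0) * np.cos(ra - ra0)) / denom

    # Tangent plane -> pixel coordinates
    x = f_mm * X / p_mm + x0
    y = f_mm * Y / p_mm + y0
    return x, y
```

A quick sanity check I've been using: a star exactly at the boresight should land on the center pixel, i.e. `radec_to_pixel(ra0, dec0, ra0, dec0)` should return (128, 128).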