background: I am working with an accelerometer to measure the tilt angle of a mechanical device. The accelerometer reads x, y, z values that form a "gravity vector". The accelerometer can NOT be assumed to be mounted square with the device, and it is NOT well calibrated, so the axes do NOT have equal sensitivity. Motion is not an issue: the device typically sits still, and any motion is slow.

so far: The accelerometer is read while the device is known to be "level", and that reading is stored as the "Zero Vector" vZero. Every reading from then on (vCurrent) is compared to vZero and the angle between them is calculated: the dot (scalar) product of the normalized vZero and vCurrent gives the cosine of the angle, and a lookup table translates that into degrees of tilt.

The tricky part: the axes are not of equal sensitivity, and this is what I need to solve. I need to scale the axes so that the dot product gives the correct cosine. Two factors are in my favor:

1 - The x axis is practically irrelevant; the tilt is along that axis, so it is only used for minor mounting correction and does not affect the required accuracy.

2 - During calibration (and only then) I am able to use another method to find the cosine of the device's tilt angle.

The question in simple terms: given a Zero Vector, a Current Vector, and the true Current Cosine of the tilt, how do I find a scalar to apply to the Y axis of my accelerometer to compensate for the inconsistent sensitivity between the Y and X axes? Since normalize(vZero) . normalize(vCurrent) = CurrentCos, how do I use a known-correct CurrentCos to find the corrected vZero and vCurrent?

next question: did any of this make sense, or am I going to have to explain it all again?
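One way to sketch the calibration step (this is my own proposed approach, not something from the post): apply an unknown scale factor s to the Y component of both vZero and vCurrent, write the normalized dot product as a function of s, and numerically solve for the s that makes it equal the independently measured CurrentCos. The names `tilt_cos` and `solve_y_scale` are hypothetical, and the bisection assumes the cosine is monotonic in s over the search bracket, which holds when only one vector has a significant Y component:

```python
import math

def tilt_cos(vzero, vcur, s):
    """Cosine of the angle between vzero and vcur after scaling both Y components by s."""
    x0, y0, z0 = vzero
    x1, y1, z1 = vcur
    dot = x0 * x1 + (s * y0) * (s * y1) + z0 * z1
    n0 = math.sqrt(x0 * x0 + (s * y0) ** 2 + z0 * z0)
    n1 = math.sqrt(x1 * x1 + (s * y1) ** 2 + z1 * z1)
    return dot / (n0 * n1)

def solve_y_scale(vzero, vcur, known_cos, lo=0.1, hi=10.0, iters=60):
    """Bisection for the Y scale factor s that makes tilt_cos match known_cos.

    Assumes tilt_cos(s) - known_cos changes sign exactly once on [lo, hi].
    """
    f = lambda s: tilt_cos(vzero, vcur, s) - known_cos
    flo = f(lo)
    if flo * f(hi) > 0:
        raise ValueError("known_cos is not bracketed on [lo, hi]")
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo, flo = mid, fmid  # root lies in the upper half
    return 0.5 * (lo + hi)
```

As a synthetic check: suppose the device is level at vZero = (0, 0, 1) and then tilted 30 degrees about the x axis, but the sensor's Y axis over-reads by a factor of 1.3, so vCurrent comes back as (0, 0.65, cos 30°) instead of (0, 0.5, cos 30°). Feeding the true cosine of 30 degrees into `solve_y_scale` should recover a correction factor near 1/1.3. Once s is known, it is applied to the Y component of every subsequent reading before the usual dot-product-and-lookup step.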