1. The problem statement, all variables and given/known data

Determine the maximum amount by which the output of an analogue-to-digital converter can differ from the analogue input, given an 8-bit analogue-to-digital converter with a full-scale input of 2.55 V and a specified error of 0.1% of full scale.

2. Relevant equations

% resolution = (step size / full scale) × 100%

3. The attempt at a solution

I first computed the specified error: 0.1% × 2.55 V = 2.55 × 10^-3 V. Then I calculated the step size: 2.55 / (2^8 − 1) = 0.01 V. So for each step the error was 2.55 × 10^-3 V. Thus the maximum amount by which the output could differ was 0.01 × 2.55 × 10^-3 = 2.55 × 10^-5. Is this correct at all? Please help!
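The intermediate numbers in the attempt can be checked with a short script. This is only a sketch that reproduces the arithmetic (specified error, step size, and % resolution from the relevant equation); it does not settle whether multiplying the two quantities together is the right final step, which is the question being asked.

```python
# Values taken from the problem statement.
full_scale = 2.55      # V, full-scale input of the ADC
bits = 8               # 8-bit converter
error_frac = 0.001     # specified error: 0.1% of full scale

# Specified error in volts: 0.1% of 2.55 V.
specified_error = error_frac * full_scale          # 2.55e-3 V

# Step size (1 LSB) in volts: full scale divided by (2^8 - 1) steps.
step_size = full_scale / (2**bits - 1)             # 0.01 V

# Percent resolution from the relevant equation.
pct_resolution = step_size / full_scale * 100      # ~0.392 %

print(specified_error, step_size, pct_resolution)
```

Note that the 0.01 figure is the step size in volts, not a percentage; the % resolution from the given equation comes out to roughly 0.39%.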