1. The problem statement, all variables and given/known data

2. Relevant equations

3. The attempt at a solution

Alright, so far what I know about Digital Signal Processing is that you first sample the signal, which gives you amplitude values with infinite precision. You can't encode these values directly, so you need to quantize them into a finite set of amplitude values. You divide the range between the max and min of the signal into L zones (L quantization levels), each with a height of Delta, so Delta = (max - min)/L.

Now my lecture notes go on to rounding and truncation quantization, but the slides shown above are the only ones given for them. I've searched online for rounding and truncation quantization and can't find much. Can anyone explain the graphs to me and how the quantization error is obtained? I can see that the rounding graph flattens at 0, while the truncation one doesn't.
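To check my understanding, here's a small Python sketch of the two quantizers as I understand them (the step Delta = 0.25 is just a made-up value, not from the slides): rounding snaps to the nearest level, truncation always drops to the level below.

```python
import math

# Quantization step (hypothetical value, just for illustration)
DELTA = 0.25

def quantize_round(x, delta=DELTA):
    """Rounding: snap x to the NEAREST level -> error in [-delta/2, +delta/2]."""
    return delta * round(x / delta)

def quantize_trunc(x, delta=DELTA):
    """Truncation: drop x to the level BELOW (floor) -> error in [0, delta)."""
    return delta * math.floor(x / delta)

# Sweep a few sample values and compare the quantization errors
for x in [-0.30, -0.10, 0.0, 0.10, 0.30]:
    qr, qt = quantize_round(x), quantize_trunc(x)
    print(f"x={x:+.2f}  round->{qr:+.2f} (err {x - qr:+.3f})  "
          f"trunc->{qt:+.2f} (err {x - qt:+.3f})")
```

If this is right, it would also explain the flat zone at 0 in the rounding graph: every input with |x| < Delta/2 maps to the 0 level (e.g. both -0.10 and +0.10 above give 0), whereas truncation sends -0.10 down to -0.25, so there's no symmetric flat zone around 0.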