(Mods, I posted a similar thread in the computer science forum but now realise that this is a more suitable place for it. Could you please remove that thread from the other forum?)

I've attached a photo of the example below. 0.2 is the number we're trying to approximate as a floating-point number, and fl(x) is that approximation, so |fl(x) - 0.2| is the round-off error. From that equation the lecturer jumps straight to |-1 + (0.1001...)_2| x 2^(-52) x 2^(-3). Could somebody explain how he made this jump?
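For reference, here's a quick numerical check I ran (a sketch only, assuming IEEE 754 double precision with 52 fraction bits, and assuming (0.1001...)_2 means the repeating binary expansion of 0.6, i.e. 3/5). Using exact rational arithmetic, the actual round-off error of 0.2 does come out equal to the lecturer's expression:

```python
from fractions import Fraction

x = Fraction(1, 5)       # exact 0.2
fl_x = Fraction(0.2)     # exact value of the stored double fl(0.2)

# round-off error |fl(x) - 0.2|
error = abs(fl_x - x)

# Lecturer's expression: |-1 + (0.1001...)_2| * 2^(-52) * 2^(-3),
# where (0.1001...)_2 repeating is exactly 3/5, so the first factor is 2/5.
claimed = Fraction(2, 5) * Fraction(1, 2**52) * Fraction(1, 2**3)

print(error == claimed)  # True on an IEEE 754 double-precision machine
```

So numerically the jump checks out exactly; what I'm after is the algebra that gets from |fl(x) - 0.2| to that product.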