1. The problem statement, all variables and given/known data

For x near 0, local linearization gives the following equation:

e^x ≈ 1 + x

Estimate to one decimal place the magnitude of the error for −1 ≤ x ≤ 1.

2. Relevant equations

3. The attempt at a solution

I'm not exactly sure what to do here, to be honest, but what I thought I'd do is try to work backwards. Generally I'd be given a margin of error to be accurate within, but here I'm instead given the interval over which the approximation is accurate within the margin of error (I assume). So what I did was plug x = −1 and x = 1 into |e^x − (1 + x)|. I got 1/e and e − 2. I took the difference of the two and got 0.3504. I'm pretty sure this is wrong; I think I messed up when I plugged −1 and 1 into |e^x − (1 + x)|.
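To sanity-check the arithmetic, here is a quick Python sketch of exactly what I computed at the two endpoints (the function name is just my own label, not anything from the problem):

import math

def linearization_error(x):
    # Magnitude of the error in the approximation e^x ≈ 1 + x at the point x
    return abs(math.exp(x) - (1 + x))

err_left = linearization_error(-1)   # |e^(-1) - 0| = 1/e ≈ 0.3679
err_right = linearization_error(1)   # |e - 2| ≈ 0.7183

print(err_left, err_right)           # 0.3679 and 0.7183
print(err_right - err_left)          # 0.3504, the difference I took

So the numbers themselves check out; I'm just not sure that taking the difference of the two endpoint errors is the right thing to do.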