It is a trick question in a sense. Just throwing numbers at a typical computer implementation of (1+1/n)^n won't work. The problem lies in how computers represent real numbers.
Suppose one uses a program like Excel or writes a simple program to calculate (1+1/n)^n (I did just this with Excel, perl, and C on a Mac and on a Linux box). The error shrinks as n increases up until n=10^7 or so; at that point it stops shrinking. The problem is that the computer evaluates (1+1/n)^n as exp(n*log(1+1/n)). For large n, log(1+1/n) ≈ (1+1/n)-1, but the computed value of (1+1/n)-1 drifts away from 1/n in off-the-shelf double-precision arithmetic, because 1+1/n must first be rounded to the nearest representable double. For sufficiently large n (10^16 or so), (1+1/n)-1 = 0 in off-the-shelf double precision!<br />
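The stall is easy to reproduce. A minimal sketch in Python (my choice here, not one of the languages named above; Python floats are the same IEEE double-precision format the Excel/perl/C runs would have used):

```python
import math

# Direct double-precision evaluation of (1 + 1/n)^n.
# The error vs. e shrinks at first, then stops improving near n = 1e7-1e8,
# because rounding 1 + 1/n to the nearest double perturbs the base.
for k in (4, 6, 8, 12, 16):
    n = 10.0 ** k
    err = abs((1.0 + 1.0 / n) ** n - math.e)
    print(f"n = 1e{k:<2}  error = {err:.3e}")

# At n = 1e16, 1/n is below half the double-precision epsilon,
# so 1 + 1/n rounds to exactly 1.0 and the "limit" collapses to 1:
print((1.0 + 1.0 / 1e16) - 1.0)    # 0.0
print((1.0 + 1.0 / 1e16) ** 1e16)  # 1.0
```

This matches the behavior described above: shrinking error through roughly n=10^7, then stagnation, then total collapse at n≈10^16.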
<br />
A brute-force way to overcome this problem is to use an extended-precision package. An even better way is to approach it analytically: a simple error analysis of (1+1/n)^n - e yields a value of n above which the error is smaller than 1e-9. I am not revealing this value on the chance that this thread is asking us to solve a homework problem.<br />
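For the brute-force route, here is one illustrative sketch using Python's standard decimal module as the extended-precision package (the post doesn't say which package was actually used, and the helper name e_approx is mine):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant decimal digits

def e_approx(n: int) -> Decimal:
    """Evaluate (1 + 1/n)^n as exp(n * ln(1 + 1/n)) in extended precision."""
    x = Decimal(1) + Decimal(1) / Decimal(n)
    return (Decimal(n) * x.ln()).exp()

e_true = Decimal(1).exp()
for k in (7, 10, 16):
    err = abs(e_approx(10 ** k) - e_true)
    print(f"n = 1e{k}: error = {err}")
```

With 50 digits of working precision, 1 + 1/n is represented accurately even for n well past 10^16, so the error keeps shrinking like it should instead of stalling.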
<br />
Since I have a "<a href="https://www.physicsforums.com/showthread.php?t=206096" class="link link--internal">https://www.physicsforums.com/showthread.php?t=206096</a>", I of course had to double-check this limit using an extended precision math package. Ta-da, it works as predicted.