This is not an all-or-nothing choice. What I and others in this thread are saying is that while technology is useful after some basic concepts have been mastered, you have to be aware of its limitations. Being able to do 360 billion arithmetic operations per second is of no use if most of them are wrong. For a more modern example that doesn't rely on a bug in a particular 24-year-old CPU, consider this:

Code (C):

    float sum = 0.0;
    for (int i = 0; i < 20; i++)
        sum = sum + .05;
    if (sum == 1.0)
        printf("Success\n");
    else
        printf("Failure\n");

Here we are adding .05 to itself 20 times, so the result should be exactly 1. After only 20 additions (orders of magnitude fewer than 360 billion!), the output is "Failure" because sum is measurably larger than 1.0. On my i7 system running Visual Studio 2015, sum ends up at 1.00000012. The same code will produce similarly incorrect results on other architectures and with other compilers.
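For anyone who wants to see the drift for themselves, here is a minimal, self-contained version of the same program, this time printing the accumulated value with extra digits. This is just a sketch; the exact digits printed depend on your compiler and floating-point settings, but the equality test fails the same way.

Code (C):

    #include <stdio.h>

    int main(void)
    {
        float sum = 0.0;

        /* .05 has no exact binary representation, so each addition
           introduces a tiny rounding error that accumulates. */
        for (int i = 0; i < 20; i++)
            sum = sum + .05;

        /* Print enough digits to expose the error; on the system
           described above this shows 1.00000012 rather than 1.00000000. */
        printf("sum = %.8f\n", sum);

        if (sum == 1.0)
            printf("Success\n");
        else
            printf("Failure\n");

        return 0;
    }

The %.8f format is only there to make the error visible; printed with the default six digits the value looks like 1.000000, which is exactly why problems like this go unnoticed until an exact comparison fails.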