Code optimization and algorithm selection are two separate things. Code optimization is the stage that comes after algorithm selection, and it is far more machine-specific. Knowing how to optimize makes you a good programmer. Even better programmers realize that someone has to maintain the code. If you want to be a top-notch programmer, you will write code that is maintainable, so that you can move on and attack other interesting problems. If you make your code so overly optimized that only you can understand it, you are the one who will be stuck maintaining it.

The second Project Euler problem is a good example. There are only eleven terms to be summed, so a brute-force algorithm will work just fine here. A slight tweak that recognizes that the even elements of the Fibonacci sequence are separated by two odd elements is something that a maintenance programmer could still understand. On the other hand, recognizing that the desired sum is itself closely related to the Fibonacci sequence, and optimizing for that, is something that a maintenance programmer will foul up.

The desired sum of the even Fibonacci terms up to and including N can be expressed rather compactly in terms of the Fibonacci numbers themselves. With N = 4000000, the whole computation reduces to a handful of operations.
Do you really want to do that?
Suppose, for example, we select the following algorithm to find the terminating null in a string:

Code:
while (strPtr[location] != 0)
    ++location;

The above algorithm is O(n), and that's about as good as you're going to get; however, the code itself can still be optimized for the machine. Here is an example of the above algorithm optimized for SSE2:
Sometimes code optimization is necessary, as in some embedded systems; however, good algorithm selection is always desired. Suppose, for example, we need to sort an array of a million randomly generated integers. There is a really big difference between using an insertion sort and a heap sort. An insertion sort has O(n^2) complexity, whereas a heap sort has O(n lg n) complexity. In other words, the insertion sort will require on the order of 1,000,000^2 = 1,000,000,000,000 (one trillion) operations, but the heap sort can do it in roughly 1,000,000 * lg 1,000,000, or about 20,000,000 operations.
There is also a lemma lurking here: for large enough n, a highly optimized O(n^2) algorithm will never outperform a plain, non-optimized O(n lg n) algorithm. Constant-factor tuning can only push the crossover point further out; it can never eliminate it.