chiro
Homework Helper
To add to rcgldr's comment: in practice, optimization depends on the problem domain more than anything else.
Optimizing code to make the best use of "cycles" or "CPU time" is one thing, but a lot of optimization is about looking at your domain: are there domain-specific relationships in the code that can be exploited, or does the domain lend itself to data structures and algorithms that accomplish the same task faster than another implementation would?
There is a rough rule of thumb relating memory use to the computational complexity of a task (or algorithm): if you sacrifice memory (i.e. use more of it), the computational cost typically goes down, and if you don't, it goes up.
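As a minimal sketch of that trade-off (my own illustration, not from the original post, using Python and a hypothetical sine-table example): one version recomputes a value on every call and uses almost no memory, while the other spends memory on a precomputed table so each call is just an index lookup.

```python
import math

# Low memory, more work per call: recompute sin() every time.
def sin_deg_recompute(deg: int) -> float:
    return math.sin(math.radians(deg))

# More memory, less work per call: precompute 360 values once,
# then each call is just an index into the table.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def sin_deg_table(deg: int) -> float:
    return SIN_TABLE[deg % 360]

if __name__ == "__main__":
    # Both give the same answer; the table version trades ~360 stored
    # floats for a cheaper per-call cost.
    print(sin_deg_recompute(30), sin_deg_table(30))
```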
The classic example of this trade-off is comparing a search over data stored with no extra overhead against one that carries a lot of overhead (e.g. a hash table).
Somewhere in between you have, say, a binary-tree index over the records, or even some kind of graph structure that helps organize the data. A hash table, though, is the case where spending memory tends to pay off most: with a good table and a good hash function that keeps collisions low (you don't aim to eliminate collisions, only to spread them as uniformly as possible), lookups become much faster. As a rule of thumb, the less memory you are willing to sacrifice, the more computational work you take on.
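A rough benchmark sketch of that comparison (again my own illustration, assuming Python, with made-up sizes): a plain list holds the data with no lookup structure, so each membership test is a linear scan, while a hash-based set spends extra memory on buckets and stored hashes to make each lookup roughly constant time.

```python
import random
import sys
import timeit

N = 100_000
data_list = list(range(N))   # compact storage, but each lookup is O(n)
data_set = set(data_list)    # extra memory for the hash table, lookups ~O(1)
queries = [random.randrange(2 * N) for _ in range(1_000)]

list_time = timeit.timeit(lambda: [q in data_list for q in queries], number=1)
set_time = timeit.timeit(lambda: [q in data_set for q in queries], number=1)

# sys.getsizeof measures only the container itself, not the stored ints,
# but it is enough to show the set's larger footprint.
print(f"list: ~{sys.getsizeof(data_list)} bytes, {list_time:.3f}s for lookups")
print(f"set:  ~{sys.getsizeof(data_set)} bytes, {set_time:.3f}s for lookups")
```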