Optimizing Code for Domain-Specific Connections and Data Structures

Thread summary:
The discussion focuses on learning pure assembly language programming for 80x86 architecture, with the intent to create an operating system and interact directly with hardware without using predefined functions. The user expresses a desire to understand assembly deeply, including how to manipulate graphics on a CRT monitor and potentially develop device drivers. They seek recommendations for an assembler that supports their goals and an editor for coding, emphasizing a nostalgic approach reminiscent of early computing eras. The conversation also touches on the challenges of accessing hardware directly due to manufacturer restrictions and the necessity of using BIOS functions for certain operations. Overall, the user is committed to learning assembly language as a hobby, despite the complexities involved.
#61
To add to rcgldr's comment: in practice, optimization depends on the domain more than on anything else.

Optimizing code to make the best use of "cycles" or "CPU time" is one thing, but a lot of optimization is about examining your domain: looking for domain-specific relationships in the code that can be exploited, or for data structures and algorithms suited to the domain that accomplish the same task faster than another implementation would.
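For instance (a hypothetical domain, not one from this thread): if the domain guarantees that record keys are small integers in a known range, a directly indexed array can replace a general-purpose search structure outright. A minimal C sketch, with made-up names and limits:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical domain fact: part numbers are always 0..999.
 * Exploiting that lets a plain array replace a general-purpose
 * search structure entirely -- one index operation per lookup. */
#define MAX_PART 1000

int main(void)
{
    int stock[MAX_PART];
    memset(stock, 0, sizeof stock);   /* no parts in stock initially */

    stock[42]  = 7;                   /* 7 units of part #42 */
    stock[137] = 3;

    printf("part #42:  %d units\n", stock[42]);
    printf("part #999: %d units\n", stock[999]);
    return 0;
}
```

No searching happens at all here; the domain knowledge turned every lookup into a single index operation.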

Typically there is a rule of thumb relating the memory used by a task (or algorithm) to its computational complexity: the trade-off is that if you sacrifice more memory, the computational complexity tends to go down, and if you sacrifice less, it tends to go up.

The clearest example is to compare a search over data stored with no memory overhead (say, a linear scan of a plain array) against one with a lot of overhead (e.g. a hash table).
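Here is a minimal C sketch of that comparison, using toy data I've made up (the key values and table size are illustrative): a linear scan over a plain array with no overhead, next to a small open-addressing hash table that spends extra empty slots to make lookups roughly constant time.

```c
#include <stdio.h>
#include <string.h>

#define N_RECORDS  5
#define TABLE_SIZE 16   /* extra memory spent to speed up lookups */

/* Linear search: no memory overhead, O(n) comparisons per lookup. */
static int linear_find(const int *keys, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (keys[i] == key)
            return i;
    return -1;
}

/* Hash table (open addressing, linear probing): extra memory,
 * roughly O(1) comparisons per lookup when collisions stay low. */
static void table_insert(int *table, int key)
{
    int i = (unsigned)key % TABLE_SIZE;
    while (table[i] != -1)              /* probe until an empty slot */
        i = (i + 1) % TABLE_SIZE;
    table[i] = key;
}

static int table_find(const int *table, int key)
{
    int i = (unsigned)key % TABLE_SIZE;
    while (table[i] != -1) {
        if (table[i] == key)
            return i;
        i = (i + 1) % TABLE_SIZE;
    }
    return -1;
}

int main(void)
{
    int keys[N_RECORDS] = {42, 7, 19, 88, 3};
    int table[TABLE_SIZE];
    memset(table, -1, sizeof table);    /* -1 marks an empty slot */

    for (int i = 0; i < N_RECORDS; i++)
        table_insert(table, keys[i]);

    printf("linear search for 88: index %d\n", linear_find(keys, N_RECORDS, 88));
    printf("hash lookup for 88:   slot %d\n", table_find(table, 88));
    return 0;
}
```

The array uses exactly as much memory as the records themselves; the table deliberately wastes slots so that a lookup starts near the right place instead of walking everything.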

Somewhere in between all of this you have, say, a binary-tree classification system for records, or even some kind of graph structure to help organize the data. A hash table, though, if it is a good table with a good hash algorithm and low collisions (you don't aim to remove collisions, you just aim to make them as uniform as possible), has a habit of buying a lot of speed with the memory it uses. As a rule of thumb: sacrifice less memory and you increase computational complexity.
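To illustrate the "in between" option, here is a minimal binary search tree sketch in C (unbalanced, no rebalancing; the names are mine, not from the thread). It spends two pointers of overhead per record and gets roughly O(log n) lookups when the data keeps it reasonably balanced, sitting between the no-overhead linear scan and the memory-hungry hash table.

```c
#include <stdio.h>
#include <stdlib.h>

/* Binary search tree: two pointers of overhead per record buys
 * O(log n) lookups on reasonably balanced data. */
struct node {
    int          key;
    struct node *left, *right;
};

static struct node *bst_insert(struct node *root, int key)
{
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = bst_insert(root->left, key);
    else if (key > root->key)
        root->right = bst_insert(root->right, key);
    return root;
}

static int bst_contains(const struct node *root, int key)
{
    while (root != NULL) {
        if (key == root->key)
            return 1;
        root = (key < root->key) ? root->left : root->right;
    }
    return 0;
}

int main(void)
{
    struct node *root = NULL;
    int keys[] = {42, 7, 19, 88, 3};

    for (size_t i = 0; i < sizeof keys / sizeof keys[0]; i++)
        root = bst_insert(root, keys[i]);

    printf("contains 19? %d\n", bst_contains(root, 19));
    printf("contains 50? %d\n", bst_contains(root, 50));
    return 0;   /* nodes are deliberately not freed in this short sketch */
}
```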
 
