Optimizing Code for Domain-Specific Connections and Data Structures

SUMMARY

The discussion focuses on the desire to learn pure assembly language programming targeting the 80x86 architecture without relying on predefined functions. The user aims to develop skills to interact directly with hardware components, such as CRT monitors, and potentially create an operating system from scratch. Key tools mentioned include NASM, FASM, MASM, and TASM, with a specific interest in using an assembler that allows for low-level programming without OS constraints. The user also seeks guidance on building a home-assembled computer and understanding the role of BIOS in hardware interaction.

PREREQUISITES
  • Understanding of 80x86 architecture
  • Familiarity with assembly language programming concepts
  • Knowledge of BIOS function calls and hardware interaction
  • Basic computer assembly skills

NEXT STEPS
  • Research NASM and its capabilities for pure assembly programming
  • Explore FASM for low-level programming without predefined functions
  • Learn about BIOS calls for hardware interaction in assembly language
  • Investigate single-board computer kits for hands-on assembly language practice

USEFUL FOR

Individuals interested in low-level programming, computer architecture enthusiasts, and hobbyists looking to learn assembly language for hardware interaction and operating system development.

#61
To add to rcgldr's comment: for practical purposes, optimization depends on the domain more than on anything else.

Optimizing code to make the best use of cycles or CPU time is one thing, but a lot of optimization is about looking at your domain and asking whether there are domain-specific connections in the code that can be exploited, or whether the domain lends itself to data structures and algorithms that accomplish the same task faster than another implementation would.

Typically there is a rule-of-thumb trade-off between memory use and the computational complexity of a task (or algorithm): if you sacrifice memory, the computational complexity decreases, and if you don't, it increases.
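
Here's a minimal sketch of that trade-off (my own illustration, not anything from the thread): memoizing a recursive function spends memory on a cache of earlier results and, in exchange, collapses an exponential-time computation into a linear-time one.

C++:
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Naive Fibonacci: no extra memory, but exponential running time.
std::uint64_t fib_slow(unsigned n) {
    return n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2);
}

// Memoized Fibonacci: spends memory on a cache of earlier results
// and collapses the running time to linear in n.
std::uint64_t fib_fast(unsigned n) {
    static std::unordered_map<unsigned, std::uint64_t> cache;
    if (n < 2) return n;
    auto it = cache.find(n);
    if (it != cache.end()) return it->second;
    std::uint64_t result = fib_fast(n - 1) + fib_fast(n - 2);
    cache.emplace(n, result);
    return result;
}

int main() {
    // Same answer either way; only the time/memory balance differs.
    std::cout << fib_slow(30) << " " << fib_fast(30) << '\n';
}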

The best example of this would be to compare searching data that carries no memory overhead (a plain linear scan) against searching a structure that carries a lot of it (i.e. a hash table).
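
A sketch of that comparison (the container choices are mine): the linear scan spends no memory beyond the data itself but pays O(n) per lookup, while the hash table pays up front in buckets and load-factor slack to get O(1) average lookups.

C++:
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// No-overhead search: scan the raw data, O(n) per lookup,
// no memory spent beyond the data itself.
bool contains_linear(const std::vector<std::string>& data,
                     const std::string& key) {
    for (const auto& item : data)
        if (item == key) return true;
    return false;
}

// Hash-table search: O(1) average per lookup, paid for with the
// buckets, stored hashes and load-factor slack the table carries.
bool contains_hashed(const std::unordered_set<std::string>& table,
                     const std::string& key) {
    return table.count(key) != 0;
}

int main() {
    std::vector<std::string> data = {"alpha", "beta", "gamma"};
    std::unordered_set<std::string> table(data.begin(), data.end());
    std::cout << contains_linear(data, "beta") << " "
              << contains_hashed(table, "beta") << '\n';  // prints: 1 1
}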

Somewhere in between you have, say, a binary-tree classification system for records, or even some kind of graph structure to help organize the data. A hash table, though, if it's a good table with a good hash algorithm and low collisions (you don't aim to remove collisions, you just aim to make them as uniform as possible), has a habit of buying a lot of speed with that memory; as a rule of thumb, the less memory you sacrifice, the more computational complexity you take on.
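
As a rough illustration of that uniformity goal (the bucket count and key set below are arbitrary choices of mine), this sketch tallies how a hash function spreads keys over a fixed number of buckets; a good one keeps every bucket's load close to the average rather than trying to avoid collisions altogether.

C++:
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Tally how evenly std::hash spreads 1000 keys over 8 buckets.
// Collisions are unavoidable once keys outnumber buckets; a good
// hash just keeps every bucket's load close to the average.
int main() {
    const std::size_t num_buckets = 8;
    std::vector<std::size_t> load(num_buckets, 0);
    std::hash<std::string> hasher;

    for (int i = 0; i < 1000; ++i)
        ++load[hasher("key" + std::to_string(i)) % num_buckets];

    for (std::size_t b = 0; b < num_buckets; ++b)
        std::cout << "bucket " << b << ": " << load[b] << " keys\n";
}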
 
