Optimize Virtual Memory by Normalizing Columns in a 1024x1024 Array

SUMMARY

The discussion focuses on minimizing page faults when normalizing a 1024x1024 array of 32-bit numbers by columns. The proposed algorithm finds the maximum value in each column and divides every number in that column by that maximum. Storing the data by column is recommended, since walking the columns of row-stored data touches every row page and incurs far more page faults. The replies conclude that effective memory management and an understanding of cache behavior matter more for performance here than paging.

PREREQUISITES
  • Understanding of 1024x1024 array structures
  • Knowledge of memory allocation and page faults
  • Familiarity with normalization algorithms
  • Basic principles of CPU cache and DRAM access optimization
NEXT STEPS
  • Research memory management techniques in C/C++
  • Learn about page fault handling in operating systems
  • Explore normalization algorithms in data processing
  • Study cache optimization strategies for high-performance computing
USEFUL FOR

Data scientists, software engineers, and system architects interested in optimizing memory usage and performance in large-scale numerical computations.

rootX
Suppose there is a 1024x1024 array of 32-bit numbers and we need to normalize it by columns.

The algorithm goes through each column, finds the maximum, and divides all numbers in the column by that maximum.
Would it be wise to store the pages by column?

My rationale:
1M (2^20 bytes) of main memory is allocated and each page is 4K bytes, as given.

If the array is stored by rows, each 4K row fills exactly one page, and only 256 of the 1024 rows fit in memory at once. So, for each column there will be 3 page faults when reading the numbers and 3 more when writing back the normalized numbers.
 
You'd need 4MB of memory to hold all the data. Unless other processes were consuming nearly all of your computer's memory, none of that 4MB of data would be paged out to the swap file.

Performance issues would instead be related to the cache size and the algorithm implemented on your computer. DRAM is normally optimized for sequential access.
 
