Optimize Virtual Memory by Normalizing Columns in a 1024x1024 Array

rootX
Suppose there is a 1024x1024 array of 32-bit numbers and we need to normalize it by columns.

The algorithm goes through each column, finds the maximum, and divides every number in that column by the maximum.
Wouldn't it certainly be wise to store the pages by column?

My rationale:
1 MB (2^20 bytes) of main memory is allocated, and each page is 4 KB, as given.

Each row of 1024 four-byte numbers is exactly 4 KB, i.e. one page, so only 256 of the 1024 rows fit in main memory at once. If the array is stored by rows, reading a single column touches 1024 different pages, far more than fit, so most accesses cause page faults when reading the numbers and again when writing back the normalized numbers. If it is stored by columns, each column occupies a single page: one fault to read it in and one write-back when it is done.
 
You'd need 4 MB of memory to hold all the data. Unless other processes were consuming nearly all of your computer's memory, none of that 4 MB would be paged out to the swap file.

Performance issues would be related to the cache size and the algorithm implemented on your computer. DRAM is normally optimized for sequential access.
 
