Programming determines how well memory hierarchy is utilized

  • Thread starter: PainterGuy
  • Tags: Memory, Programming
Summary
In modern computer systems, the overall processing speed is often constrained by memory access rather than CPU performance, making efficient memory utilization crucial. Programming plays a significant role in optimizing how data is read from and written to memory, with redundant logical steps potentially wasting both time and storage. The effectiveness of a compiler in translating high-level code into machine language also impacts memory hierarchy utilization, as it influences how efficiently programs interact with memory. Additionally, the choice and design of data structures are vital for optimizing code and memory performance within the architecture. Overall, achieving optimal performance requires a comprehensive understanding of both programming practices and compiler capabilities.
PainterGuy
TL;DR: a question about memory hierarchy performance
Hi,

Please have a look at the attachment. It says, "In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible."

I understand that programming is essentially a set of directives for computer hardware to take 'logical' steps to get a certain job done. It does make sense that if there is useless redundancy in those logical steps, then it wastes time as well as storage of a computer system. For example, if there are three positive numbers A, B, and C, and if A = B and B < C, then obviously A < C and you don't even need to make a comparison between A and C. But if the logical step of comparing A and C is taken anyway, it wastes storage as well as processing time.
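To show what I mean in C++ terms, here is a rough sketch (the function and its arguments are hypothetical, just to illustrate the redundancy):

```cpp
#include <cassert>

// Hypothetical example: once A == B and B < C are established,
// A < C follows by transitivity, so a third comparison between
// A and C never needs to be emitted or executed.
bool a_less_than_c(int a, int b, int c) {
    assert(a == b);   // given: A = B
    return b < c;     // B < C already implies A < C
}
```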

I'm not sure if what I said above is completely correct; I was only trying to convey how I think of it. I have worked in C++, and many a time you just write code and a compiler translates it into machine language. I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right? Or perhaps compiler translation also plays a role, but not as much as the programming itself. Here, I'm thinking of programming as 'high-level language' programming.

Could you please help me to get a better understanding of it? Thanks a lot.
 

Attachments

  • memory_hierarchy1.jpg (82.4 KB)
Where does this attachment come from? Can you give a reference? A link?
 
PainterGuy said:
In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible.

What this quote is talking about is the fact that, in modern computer systems, the CPU is faster than the memory is, so any program will spend most of its time waiting on memory to be read from or written to, not waiting for the CPU to compute the next instruction. So there is much more speed gain to be had by optimizing how programs read from and write to memory, as opposed to optimizing how programs execute CPU instructions.
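To make that concrete, here is a minimal C++ sketch (the function names, and the assumption of an n-by-n matrix stored row-major in a single vector, are mine): both loops execute essentially the same instructions, but they use the memory hierarchy very differently.

```cpp
#include <cstddef>
#include <vector>

// The first loop walks memory sequentially, so every cache line it
// fetches is fully used; the second strides n elements between
// accesses and, for large n, misses the cache almost every time.
long long sum_row_order(const std::vector<int>& m, std::size_t n) {
    long long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            s += m[i * n + j];   // consecutive addresses
    return s;
}

long long sum_column_order(const std::vector<int>& m, std::size_t n) {
    long long s = 0;
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t i = 0; i < n; ++i)
            s += m[i * n + j];   // jumps n ints between accesses
    return s;
}
```

On typical hardware the second version can be several times slower for large n even though the instruction counts are nearly identical; the access pattern, not the CPU work, dominates.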

PainterGuy said:
It does make sense that if there is useless redundancy in those logical steps, then it wastes time as well as storage of a computer system.

This is true, but it's not the kind of efficiency the attachment you give is talking about. Eliminating redundant logical steps will make the program not need to execute as many CPU instructions to accomplish the same goal; but, as above, programs don't spend the majority of their time waiting for the CPU to execute instructions, so the speed gains to be had by eliminating redundant logical steps are limited.

PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized.

Yes, that's true; how a compiler translates source code into machine code can have a huge effect on how efficiently the program reads from and writes to memory.
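One small illustration (a sketch; the function names are made up): because the output pointer in the first version might alias the input array, a compiler generally has to store the running total back to memory on every iteration, while the second version lets it keep the total in a register.

```cpp
// *sum may alias data[i], so the compiler must conservatively reload
// and store *sum on every iteration: a memory read and write each time.
void sum_into(const int* data, int n, int* sum) {
    for (int i = 0; i < n; ++i)
        *sum += data[i];
}

// A local accumulator cannot alias anything, so the compiler is free
// to keep it in a register and touch memory only once at the end.
void sum_into_fast(const int* data, int n, int* sum) {
    int acc = *sum;
    for (int i = 0; i < n; ++i)
        acc += data[i];
    *sum = acc;   // single store
}
```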
 
A useful perspective on memory utilization is to consider memory access as I/O (input/output). Accessing a data structure loaded into main memory is faster than accessing intermediate memory, and much faster than accessing storage. A similar paradigm applies to operating system performance: program instruction sets that are loaded and resident perform faster than program sets waiting in a queue, and much faster than those read from storage.

If the above is reasonable and accurate, then code/memory optimization depends strongly on the choice and design of data structures -- including variables, objects, and allocations -- within the available computer architecture.
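As a rough C++ sketch of that point (a standard illustration, not taken from the articles linked below): summing the same values from a std::vector and from a std::list costs the same number of additions but produces very different memory traffic.

```cpp
#include <list>
#include <numeric>
#include <vector>

// std::vector stores elements contiguously, so a linear scan streams
// through whole cache lines and the hardware prefetcher can keep up.
long long total(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// std::list allocates each node separately; every step chases a
// pointer to a possibly far-away address, defeating cache and prefetch.
long long total(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0LL);
}
```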

There are several applicable Insight articles, including this tutorial by @QuantumQuest: https://www.physicsforums.com/insights/intro-to-data-structures-for-programming/

This article conforms to an I/O model of memory hierarchy optimization, with attention to iterative structure placement in code threads: https://suif.stanford.edu/papers/mowry92/subsection3_1_2.html
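In the spirit of that paper, here is a hedged sketch of software prefetching using the GCC/Clang builtin __builtin_prefetch (compiler-specific and only a hint; the lookahead distance of 16 elements is an arbitrary value that real code would tune):

```cpp
#include <cstddef>

// Request a cache line a few iterations before it is needed so the
// fetch overlaps with useful work instead of stalling the loop.
long long sum_with_prefetch(const int* data, std::size_t n) {
    long long s = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16]);  // hint; may be ignored
        s += data[i];
    }
    return s;
}
```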
 
PeterDonis said:
Where does this attachment come from? Can you give a reference? A link?

Thank you.

It comes from the book Digital Fundamentals by Thomas Floyd.
 
I have included some quotes from a related thread on "cache controller" below.

PeterDonis said:
So there is much more speed gain to be had by optimizing how programs read from and write to memory

PainterGuy said:
So, it might be possible that when a certain program is written, including an OS, it is written in such a way as to help a microprocessor coordinate with a cache controller to speed up the action.

Rive said:
In modern CPUs there are usually instructions which can modify cache behavior and trigger certain functionality, but it is hard to use them efficiently. For most programmers they are just a kind of 'eyecandy'. Compilers regularly use them, as far as I know, but most cache management is still done by HW.
 
PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right?

Some programs are I/O intensive, some others are memory intensive, and some depend on raw computing capacity. (Many others are just a pile of rubbish.) Optimization - fitting the SW to get the best performance on a given HW - is a difficult topic and always depends on the actual software.
Compilers (and their different optimization levels) do have an effect, but it is not exclusive to memory.
Don't limit the topic only to memory.
 
