Programming determines how well memory hierarchy is utilized


Discussion Overview

The discussion revolves around the relationship between programming practices and the utilization of memory hierarchy in computer systems. Participants explore how programming affects processing speed, particularly in relation to memory access, compiler efficiency, and optimization strategies. The conversation touches on theoretical and practical aspects of programming, memory management, and performance optimization.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that programming dictates how effectively a memory hierarchy is utilized, influencing overall processing speed.
  • Others argue that while eliminating redundant logical steps can improve efficiency, the primary bottleneck in modern systems is often memory access rather than CPU instruction execution.
  • It is suggested that compiler performance significantly impacts memory utilization, with some participants noting that compilers can optimize memory access patterns.
  • A perspective is introduced that likens memory access to I/O operations, emphasizing the importance of data structure design in optimizing memory performance.
  • Some participants mention that certain CPU instructions can enhance cache behavior, although they may not be effectively utilized by all programmers.
  • There is a recognition that optimization is a complex topic that varies based on the specific software and hardware context, and not solely focused on memory utilization.

Areas of Agreement / Disagreement

Participants generally agree that programming and compiler efficiency play crucial roles in memory utilization, but there is no consensus on the extent of their impact compared to other factors such as data structure design and the nature of the software being executed. Multiple competing views remain regarding the best strategies for optimization.

Contextual Notes

The discussion highlights various assumptions about the relationship between programming, memory access, and performance, but does not resolve the complexities of these interactions. Limitations in understanding the full impact of compiler optimizations and the specific conditions under which different strategies are effective are noted.

Who May Find This Useful

This discussion may be of interest to software developers, computer scientists, and engineers focused on performance optimization, memory management, and compiler design.

PainterGuy
TL;DR: a question about memory hierarchy performance
Hi,

Please have a look at the attachment. It says, "In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible."

I understand that programming is actually a set of directives for computer hardware to take 'logical' steps to get a certain job done. It does make sense that if there is useless redundancy in those logical steps then it wastes time as well as storage in a computer system. For example, if there are three positive numbers A, B, and C, and if A=B and B<C, then obviously A<C and you don't even need to make a comparison between A and C. But if the logical step of comparing A and C is taken anyway, it wastes storage as well as processing time.

I'm not sure if what I said above is completely correct; I was only trying to convey how I think of it. I have worked in C++, and many a time you just write code and a compiler translates it into machine language. I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right? Or perhaps compiler translation plays a role, but not as much as the programming itself. Here, I'm thinking of programming as 'high level language' programming.

Could you please help me to get a better understanding of it? Thanks a lot.
 

Attachments

  • memory_hierarchy1.jpg
    82.4 KB
Where does this attachment come from? Can you give a reference? A link?
 
PainterGuy said:
In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible.

What this quote is talking about is the fact that, in modern computer systems, the CPU is faster than the memory is, so any program will spend most of its time waiting on memory to be read from or written to, not waiting for the CPU to compute the next instruction. So there is much more speed gain to be had by optimizing how programs read from and write to memory, as opposed to optimizing how programs execute CPU instructions.

PainterGuy said:
It does make sense that if if there is a useless redundancy in those logical steps then it wastes time as well as storage of a computer system.

This is true, but it's not the kind of efficiency the attachment you give is talking about. Eliminating redundant logical steps will make the program not need to execute as many CPU instructions to accomplish the same goal; but, as above, programs don't spend the majority of their time waiting for the CPU to execute instructions, so the speed gains to be had by eliminating redundant logical steps are limited.

PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized.

Yes, that's true; how a compiler translates source code into machine code can have a huge effect on how efficiently the program reads from and writes to memory.
 
A useful perspective on memory utilization is to treat memory access as I/O (input/output). Accessing a data structure already loaded into main memory is faster than accessing intermediate memory, and much faster than accessing storage. A similar paradigm applies to operating-system performance: program instructions that are loaded and resident execute faster than programs waiting on a queue, and much faster than those that must be read from storage.

If the above is reasonable and accurate, then code/memory optimization depends strongly on data structure -- including variables, objects, and allocations -- choice and design within the available computer architecture.

There are several applicable Insight articles, including this tutorial by @QuantumQuest: https://www.physicsforums.com/insights/intro-to-data-structures-for-programming/

This article fits an I/O model of memory-hierarchy optimization, with attention to the placement of iterative structures in code: https://suif.stanford.edu/papers/mowry92/subsection3_1_2.html
 
PeterDonis said:
Where does this attachment come from? Can you give a reference? A link?

Thank you.

It comes from the book Digital Fundamentals by Thomas Floyd.
 
I have included some quotes from a related thread on "cache controller" below.

PeterDonis said:
So there is much more speed gain to be had by optimizing how programs read from and write to memory

PainterGuy said:
So, it might be possible that when a certain program is written, including OS, it is written in such a way to help a microprocessor to coordinate with a cache controller to speed up the action.

Rive said:
In modern CPUs there are usually instructions which can modify cache behavior and trigger certain functionality, but it is hard to use them efficiently. For most programmers they are just kind of 'eyecandy'. Compilers are regularly using them as far as I know, but still most cache management is done by HW.
 
PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right?

Some programs are I/O intensive, some are memory intensive, and some depend on raw computing capacity. (Many others are just a pile of rubbish.) Optimization, fitting the software to get the best performance on given hardware, is a difficult topic and always depends on the actual software.
Compilers (and their different optimization levels) do have an effect, but it is not exclusive to memory.
Don't limit the topic to memory alone.
 
