In a CPU, what exactly is a cache?


Discussion Overview

The discussion centers on the concept of cache in CPUs, exploring its function, structure, and impact on performance. Participants examine different cache levels (L1, L2, L3, L4) and their implications for processing speed, as well as historical perspectives on cache importance in computing.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants describe cache as a storage area for recently accessed data, which allows for faster retrieval compared to accessing main memory.
  • Others note that larger caches can improve performance, but there is uncertainty about the diminishing returns of cache size increases in modern CPUs.
  • A participant mentions that AMD and Intel may measure cache sizes differently, which could affect comparisons between processors.
  • Some argue that after a certain cache size, further increases may not significantly impact performance, as many applications can fit within smaller caches.
  • One participant introduces the concept of associative memory in cache design, explaining how it facilitates quick address lookups.
  • Another points out that cache is a high-speed, expensive memory type, and its limited size in budget CPUs is a cost-saving measure.

Areas of Agreement / Disagreement

Participants express varying views on the importance of cache size and its impact on performance, indicating that there is no consensus on the current significance of cache in modern CPUs.

Contextual Notes

Some discussions reference historical contexts and personal experiences with cache performance, which may not reflect current technological standards. There are also mentions of specific cache hit times and associativity that are not fully explained.

The_Absolute
In a CPU, what exactly is a "cache"?

I was wondering what exactly a "cache" is in a CPU. Why does having a larger cache improve performance? In the Phenom II and Core i7 processors, I noticed that they have a small amount of L2 and L3 cache, and 8 MB of L4 cache per core. What does this mean?

The Core 2 processors only have 3 MB of L2 cache per core. How much of a performance increase do you get with the larger caches?
 
Cache is where the processor stores things it has recently accessed, so that if it needs them again it can get them much faster. As an analogy, imagine you had a large warehouse (RAM); finding things in this vast warehouse would be time-consuming. However, you set up a small shelf at the front door, and whenever you used something you put it on this shelf, so that if you needed it again you could just grab it from there (cache). The shelf has limited size, so you can only keep the stuff you've used very recently on it.
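The shelf-and-warehouse idea is essentially a least-recently-used (LRU) eviction policy. Here's a minimal Python sketch of it; the `ShelfCache` name and the fetch callback are invented for illustration, not anything a real CPU exposes:

```python
from collections import OrderedDict

class ShelfCache:
    """A tiny least-recently-used (LRU) cache: the 'shelf' from the analogy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.shelf = OrderedDict()  # keys kept in order of last use

    def get(self, key, fetch_from_warehouse):
        if key in self.shelf:                # hit: grab it off the shelf
            self.shelf.move_to_end(key)      # mark it most recently used
            return self.shelf[key]
        value = fetch_from_warehouse(key)    # miss: slow trip to the warehouse
        self.shelf[key] = value
        if len(self.shelf) > self.capacity:
            self.shelf.popitem(last=False)   # evict the least recently used item
        return value
```

Real hardware does this with fixed-size lines and dedicated circuitry rather than a dictionary, but the eviction idea is the same: when the shelf is full, the least recently touched item gives up its spot.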

The different levels (L1, L2, L3) form a hierarchy of speed and size: L1 (fastest and smallest), then L2, L3, RAM, and finally the hard drive (slowest and largest). The system checks each level in turn, and wherever it finds the data first is where it fetches it from.
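That fastest-first search order can be sketched as a simple loop. All the level names and latencies below are made-up illustrative numbers, not real CPU specs:

```python
# Sketch of a multi-level lookup: check each level in turn, fastest first,
# and accumulate the cycles spent checking along the way.
def load(address, levels):
    """levels: list of (name, latency_cycles, contents) from fastest to slowest."""
    total_cycles = 0
    for name, latency, contents in levels:
        total_cycles += latency
        if address in contents:
            return contents[address], name, total_cycles  # found at this level
    raise KeyError(address)  # not resident anywhere (would page from disk)

# Illustrative hierarchy: the data for 0x10 is resident in L2 and RAM.
l1  = ("L1",  3,   {})
l2  = ("L2",  9,   {0x10: "x"})
ram = ("RAM", 200, {0x10: "x", 0x20: "y"})
```

Notice the cost structure: a value found in L2 costs the L1 probe plus the L2 probe, which is why a miss at every level is so much worse than a hit near the top.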

I wish I could give you more up to date info on cache sizes. I remember years ago (2000ish) cache was very important, arguably more important than clock speed. It seems like caches have sort of leveled out recently, which leads me to believe they must be losing the benefit from increases in size. Another factor to be aware of is that I believe AMD and Intel measure their caches differently: I think AMD gives the size of the independent caches for each core, while Intel gives the total size of a combined cache shared by all the cores. Again, I'm a bit fuzzy on the more recent specifics, so hopefully someone else will come along and make corrections.
 


DaleSwanson said:
I wish I could give you more up to date info on cache sizes. I remember years ago (2000ish) cache was very important, arguably more important than clock speed. It seems like caches have sort of leveled out recently, which leads me to believe they must be losing the benefit from increases in size.
I'm not as up to date on it as I used to be either, but I suppose that after the size gets to a certain point, further increases don't matter much, because the applications that matter for a computer's speed already fit in the smaller cache. I remember 10 years ago when cache size was a critical factor in applications like SETI@home. Either it fit completely into the L2 cache or it didn't, and the difference in performance between the two was vast.
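That "either it fits or it doesn't" cliff can be sketched with a tiny LRU-cache simulation. The access pattern (a repeated sweep over 100 blocks) and the capacities below are invented purely for illustration:

```python
from collections import OrderedDict

def hit_rate(accesses, capacity):
    """Fraction of accesses served from an LRU cache of the given capacity."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark most recently used
        else:
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(accesses)

# A working set of 100 distinct blocks, swept over and over.
accesses = list(range(100)) * 50

# Once the whole working set fits, almost everything hits; just below that
# size, a sequential sweep under LRU evicts each block right before it is
# needed again, so almost nothing hits.
print(hit_rate(accesses, 100))  # 0.98 - only the first pass misses
print(hit_rate(accesses, 99))   # 0.0  - pathological: every access misses
```

LRU plus a strict sequential sweep is the worst case, so real workloads see a softer cliff, but the fits-or-doesn't behaviour described above is real.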
 


A cache also involves "associative" or "content-addressable" memory. The lower bits of the address are passed directly through to access the memory, depending on the size of each cache cell, and the upper bits are fed into a set of parallel comparators that look for a match in a single parallel operation, giving a near-instant address lookup. When used for a memory cache, there can be either no match or a single match. If there's a match, then the corresponding cache cell is used. If there isn't a match, then RAM (or disk, if paged virtual memory is in use) is accessed instead. I would assume that each level of cache and virtual memory uses an independent set of content-addressable memory to handle the upper bits of addresses.
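Here's a sketch of that address split, assuming a made-up geometry (64-byte lines, 128 sets, 2 ways; not any particular CPU): the low bits select a byte within the line, the middle bits select a set, and the remaining upper bits form the tag that the comparators check.

```python
# Sketch of a set-associative lookup: lower address bits pick a set, and the
# upper bits (the tag) are compared against every way of that set. Hardware
# does those comparisons in parallel; this loop just models them serially.
LINE_BYTES = 64    # bytes per cache line (offset field: 6 bits)
NUM_SETS   = 128   # number of sets       (index field: 7 bits)
NUM_WAYS   = 2     # 2-way set associative

# cache[set_index] is a list of (tag, line_data) pairs, at most one per way.
cache = [[] for _ in range(NUM_SETS)]

def split(address):
    offset = address % LINE_BYTES
    index  = (address // LINE_BYTES) % NUM_SETS
    tag    = address // (LINE_BYTES * NUM_SETS)
    return tag, index, offset

def lookup(address):
    tag, index, offset = split(address)
    for way_tag, line in cache[index]:   # the "parallel comparators"
        if way_tag == tag:               # at most one way can match
            return line[offset]          # hit
    return None                          # miss: go to RAM (or disk)
```

Because only the tag bits need comparing, and only within one set, the hardware can answer "is this address resident?" in a single cycle or two instead of searching the whole cache.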
 


The cache on your CPU has become a very important part of today's computing. The cache is a very high-speed and very expensive type of memory, which is used to speed up the memory-retrieval process. Because it is so expensive, CPUs come with a relatively small amount of cache compared with the main system memory. Budget CPUs have even less cache; this is the main way that the top processor manufacturers take cost out of their budget CPUs.

Without cache memory, every time the CPU requested data it would send a request to the main memory, which would then be sent back across the memory bus to the CPU. This is a slow process in computing terms. The idea of the cache is that this extremely fast memory stores the data that is frequently accessed and, if possible, the data around it, to achieve the quickest possible response time for the CPU. It's based on playing the percentages: if a certain piece of data has been requested 5 times before, it's likely that this specific piece of data will be required again, and so it is stored in the cache memory.
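The "data around it" part is spatial locality: on a miss, the cache pulls in a whole fixed-size line, so neighbouring bytes come along for free. A minimal sketch (the 64-byte line size is typical but illustrative):

```python
# Sketch of spatial locality: a miss fetches an entire line, so the
# neighbours of the requested byte arrive in the same slow transfer.
LINE = 64          # bytes per cache line; illustrative

cache = {}         # line base address -> the 64 cached bytes

def line_base(address):
    return address - (address % LINE)

def read(address, ram):
    base = line_base(address)
    if base not in cache:                     # miss: one slow RAM transfer...
        cache[base] = ram[base:base + LINE]   # ...brings in the neighbours too
    return cache[base][address - base]        # nearby reads now hit for free
```

This is why walking an array in order is so much faster than jumping around randomly: each miss pays for the next 63 bytes of sequential reads.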
 


DaleSwanson said:
I wish I could give you more up to date info on cache sizes.

Here's a short table:

http://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_15225,00.html

In addition to what's on this table, Patterson & Hennessy give the Opteron 2300's cache hit times - 3 cycles, 9 cycles, and 38 cycles - and degrees of associativity - 2-, 16-, and 32-way, respectively. (Useful?) See the "Real Stuff" section of the cache chapter (if you have it). There are probably more details hidden on the manufacturers' websites somewhere...
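Those hit times plug into the standard average-memory-access-time (AMAT) calculation. The 3/9/38-cycle figures are the ones quoted above; the miss rates and memory latency below are invented purely for illustration:

```python
# AMAT for a multi-level hierarchy, computed innermost-out:
# AMAT = L1_hit + L1_miss * (L2_hit + L2_miss * (L3_hit + L3_miss * mem))
def amat(hit_times, miss_rates, mem_latency):
    t = mem_latency
    for hit, miss in zip(reversed(hit_times), reversed(miss_rates)):
        t = hit + miss * t   # cost of this level plus the slower path beyond it
    return t

# Hit times from the Opteron 2300 figures quoted above; the miss rates
# (5%, 20%, 30%) and 200-cycle memory latency are made up for this example.
print(amat([3, 9, 38], [0.05, 0.20, 0.30], mem_latency=200))  # roughly 4.4
```

So even though RAM itself costs 200 cycles in this example, the hierarchy brings the average access down to a handful of cycles, which is the whole point of caching.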
 
