
Data in the CPU cache

  1. Feb 12, 2017 #1

    fluidistic

    Gold Member

    Hello,
    I wonder what kind of data usually goes into the CPU cache. Generally the size of the cache is measured in KB or MB, in contrast with RAM, which is measured in GB.
    I understand that the CPU accesses the RAM much more slowly than the cache, so it's better for it to find the data in the cache instead of in the RAM. But the cache is so small that I wonder how it can be that useful.
    Is it possible to store a small .txt file in the cache? If so, accessing the .txt file should be much faster than accessing it in RAM; is there an easy way to benchmark the time it takes to "read" this .txt file and thus determine whether the file is indeed in the cache?
    Thank you.
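A minimal sketch of such a benchmark in Python (the file name and contents are made up for the demonstration). One caveat up front: from a high-level language, the timing difference on repeated reads is dominated by the operating system's file cache in RAM, not the CPU cache itself, so this measures "cached somewhere" rather than "in the CPU cache":

```python
import os
import time

# Hypothetical small file, created here so the sketch is self-contained.
path = "cache_test.txt"
with open(path, "w") as f:
    f.write("hello, cache!\n" * 100)

def timed_read(p):
    """Read the whole file and return (elapsed seconds, contents)."""
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - t0, data

cold, data1 = timed_read(path)   # first read: may have to go to disk
warm, data2 = timed_read(path)   # repeat read: served from OS/CPU caches
print(f"first read:  {cold * 1e6:.1f} us")
print(f"second read: {warm * 1e6:.1f} us")

os.remove(path)  # clean up the temporary file
```

Typically the second read is noticeably faster, but as noted, that speedup mostly reflects the OS page cache rather than the CPU cache in isolation.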
     
  3. Feb 12, 2017 #2

    phinds

    Gold Member
    2016 Award

    One of the main speed advantages of cache is for programs with loops. If you are executing the same few, or few dozen, instructions over and over in a loop, your program executes noticeably faster if the instructions are in the cache than if they are in main store.
     
    Last edited: Feb 12, 2017
  4. Feb 12, 2017 #3
    The CPU cache is used for holding variables that the CPU needs to access very frequently.
    As phinds said, this would typically be things such as loop counters, or constant pointers to frequently addressed items in main memory.
    At the level of files there is a different kind of cache, whereby some or even all of a file held on disk is copied into RAM.
    Accessing the data that way is significantly faster than continually reading from the disk.
     
  5. Feb 12, 2017 #4

    phinds

    Gold Member
    2016 Award

    But it is NOT just data. Not just things like loop counters, it is the program code as well as data. It does no good to keep a loop counter in cache if you have to fetch all of the instructions in the loop out of main store every time through the loop.
     
  6. Feb 12, 2017 #5

    StoneTemplePython

    Gold Member

    A special case answer, albeit a very important one, involves how people optimize matrix multiplication. E.g. look at blocked matrix multiplication. You might enjoy these slides (from 1999 apparently but still quite relevant).

    https://people.eecs.berkeley.edu/~demmel/cs267_Spr99/Lectures/Lect_02_1999b.pdf
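The idea in those slides can be sketched in a few lines of Python: multiply in small b x b blocks so that each pair of blocks keeps being reused while it is still cache-resident. Pure Python will not show the speedup (interpreter overhead swamps cache effects), but the loop structure is the point; the block size b = 4 here is an arbitrary illustration, not a tuned value:

```python
import random

def naive_matmul(A, B):
    """Textbook triple loop over n x n matrices (lists of lists)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def blocked_matmul(A, B, b=4):
    """Same product, computed block by block.

    Each (ii, jj, kk) iteration works on b x b sub-blocks, so in a
    compiled implementation those blocks can stay resident in cache
    while they are reused, instead of streaming whole rows/columns
    through the cache on every pass.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, b):
        for jj in range(0, n, b):
            for kk in range(0, n, b):
                for i in range(ii, min(ii + b, n)):
                    for k in range(kk, min(kk + b, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + b, n)):
                            C[i][j] += a * B[k][j]
    return C

n = 8
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]
print(naive_matmul(A, B) == blocked_matmul(A, B))  # same result, different access pattern
```

Both versions add the partial products in the same k order, so the results match exactly; only the memory access pattern differs, which is exactly the knob cache blocking turns.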
     
  7. Feb 13, 2017 #6
    There is something called the memory hierarchy in computer architecture. You have the hard drive, the RAM, then the cache. As you go up, each memory level gets smaller, but it is faster and costs more per storage unit. With the proper use of these different levels, you get something like virtual memory: storage that is the size of the hard drive, but accessed much faster than if the CPU read the hard drive directly. The CPU accesses the cache faster than the RAM, and accesses the RAM faster than the hard drive. There are different algorithms for deciding what data to fetch into each upper level of this hierarchy. For example, one principle (temporal locality) says that any given instruction is likely to be executed more than once, so it is kept in the cache. Another (spatial locality) says that adjacent instructions are likely to be executed next, so a whole block of instructions is fetched from the RAM into the cache. I vaguely remember this stuff from my bachelor's degree, but I believe the concepts I described are somewhat correct.
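That "adjacent items are fetched together" idea can be illustrated with a rough Python timing sketch: visiting the same list sequentially versus in large strides. The stride of 4096 is an arbitrary choice, and in pure Python the effect is largely masked by interpreter overhead, so treat the printed numbers as illustrative only:

```python
import time

N = 1_000_000
data = list(range(N))
stride = 4096  # jump far enough that successive accesses are far apart in memory

def sequential_sum(xs):
    # Visits elements in storage order: each fetched cache line is fully used.
    return sum(xs)

def strided_sum(xs, s):
    # Visits every element exactly once, but in stride-s order, so
    # consecutive accesses land far apart in memory.
    total = 0
    for start in range(s):
        for i in range(start, len(xs), s):
            total += xs[i]
    return total

t0 = time.perf_counter()
a = sequential_sum(data)
t1 = time.perf_counter()
b = strided_sum(data, stride)
t2 = time.perf_counter()
print(f"sequential: {t1 - t0:.3f}s  strided: {t2 - t1:.3f}s")
```

Both functions compute the same total; only the order of accesses differs. In compiled code the sequential version can be several times faster, precisely because of the block-fetch behaviour described above.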
     
  8. Feb 14, 2017 #7

    FactChecker

    Science Advisor
    Gold Member

    It should be mentioned that cache is generally for temporary storage. There are different levels of cache (L3, L2, L1) of increasing speed: L3 feeds L2, which feeds L1. L1 is on the microprocessor chip; L2 and L3 can be in the CPU module or on the motherboard. The computer can look ahead and start pulling data from slower memory into faster memory before it is needed, so the contents of the different levels of cache are always changing. This is all carefully thought out. Messing with it can cause unexpected and counter-intuitive results.

    EDIT: Changed all misspelled "cash" to "cache"
     
    Last edited: Feb 14, 2017
  9. Feb 14, 2017 #8

    phinds

    Gold Member
    2016 Award

    Well, I don't know about that. I NEVER use MY money for temporary storage. :smile:
     
  10. Feb 14, 2017 #9

    FactChecker

    Science Advisor
    Gold Member

    AAARRRRRGGGGGHHHHH! Ok. I'm a moron at spelling. I'll correct it.
     
  11. Feb 14, 2017 #10
    I bet it can. I doubt if anyone other than an OS developer could do much about it anyway though.
     
  12. Feb 14, 2017 #11

    phinds

    Gold Member
    2016 Award

    I don't think cache management is part of the OS; it is part of the on-board processing, the "computer within the computer," as it were. But I'm not 100% sure that that's always the case.
     
  13. Feb 14, 2017 #12

    phinds

    Gold Member
    2016 Award

    I, of course, niver misspolle anythung.
     
  14. Feb 14, 2017 #13
    I think cache management is part of the architecture of the CPU.
     
  15. Feb 15, 2017 #14
    The CPU (or at least most CPUs) provides basic, automatic cache management, but there are also instructions for explicit cache management (so code can prefetch data into the cache that will be needed in the near future). And by organizing the data usage to fit the cache structure, the programmer and the compiler (maybe even the OS) can affect cache efficiency.
     
  16. Feb 15, 2017 #15

    DrClaude


    Staff: Mentor

    Most of the posts here appear to be from computer scientists, focusing on the caching of instructions. As someone who has simulated big systems, let me tell you that proper cache management is also very important for data. For instance, accessing arrays following their storage order in memory is very important to avoid cache misses, where the data needed is not in cache and must be fetched from main RAM. This is because, as previously said, one of the main assumptions of caching algorithms is that the next requested part of memory is most likely near the previously accessed part. Cache misses can severely affect a program's efficiency!
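A rough Python sketch of that access-order effect, using a flat list to mimic row-major 2D array storage (in an interpreted language the timing gap is much smaller than in compiled code, where it can be several-fold, so the numbers printed here are illustrative only):

```python
import time

n = 1000
# n x n "matrix" stored as one flat list in row-major order,
# mimicking how a 2D array is laid out in memory.
flat = [float(i) for i in range(n * n)]

def sum_row_major(a, n):
    # Follows storage order: consecutive memory locations, cache-friendly.
    total = 0.0
    for i in range(n):
        base = i * n
        for j in range(n):
            total += a[base + j]
    return total

def sum_col_major(a, n):
    # Stride-n jumps against storage order: each access may miss the cache.
    total = 0.0
    for j in range(n):
        for i in range(n):
            total += a[i * n + j]
    return total

t0 = time.perf_counter()
r = sum_row_major(flat, n)
t1 = time.perf_counter()
c = sum_col_major(flat, n)
t2 = time.perf_counter()
print(f"row-major: {t1 - t0:.3f}s  column-major: {t2 - t1:.3f}s")
```

The two sums are identical; only the traversal order differs, which is exactly the "follow the storage order" advice above.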
     
    Last edited: Feb 15, 2017
  17. Feb 15, 2017 #16

    DrClaude


    Staff: Mentor

  18. Feb 15, 2017 #17

    FactChecker

    Science Advisor
    Gold Member

    I have seen work in time-critical applications where the cache activity was monitored and successfully optimized. My only first-hand attempt was when I naively tried to optimize and got bad results. In that effort, a large amount of data was being looped through repeatedly. Trying to be smart, I alternated looping forward and backward so that the data last looped through would be the first looped through in the next iteration. To my surprise, that was significantly slower. It taught me a lesson and after that I left it to the compiler.
     
  19. Feb 15, 2017 #18
    This is called 'code profiling'. There are various tools to monitor cache usage, branch prediction, and many other things.
    Also, many CPUs (and GPUs) have hardware support for this.
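On Linux, for example, `perf stat -e cache-misses ./program` reads those hardware counters directly. As a simple language-level stand-in, here is a minimal sketch with Python's built-in cProfile, which shows where time is spent (though not cache behaviour; the `hot_loop` workload is made up for the demonstration):

```python
import cProfile
import io
import pstats

def hot_loop():
    """Arbitrary compute-bound workload to profile."""
    total = 0
    for i in range(200_000):
        total += i * i
    return total

# Profile just the call to hot_loop().
pr = cProfile.Profile()
pr.enable()
result = hot_loop()
pr.disable()

# Print the five most expensive entries by cumulative time.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

For actual cache-miss counts you need the hardware-counter tools mentioned above; a language-level profiler like this only tells you *where* the time goes, which is usually the first step before reaching for those tools.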
     
  20. Feb 15, 2017 #19

    jbriggs444

    Science Advisor

    I've been known to temporarily transcribe a phone number to cache on a dollar bill. There is also a cache of cash in my car for tolls, parking, burgers and such.
     
  21. Feb 15, 2017 #20

    phinds

    Gold Member
    2016 Award

    Yes, that is exactly what I had just said.

    Ah, good to know. I was not aware of that but it makes sense.
     