
Trouble understanding Virtual cache

  1. Apr 22, 2014 #1


    I'm having trouble understanding what virtual cache actually means.

    I understand that to get a cache hit (physical cache) you index into the cache set and compare all the tags in that set with the tag portion of the address you are looking up.
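    The tag/index/offset split described above can be sketched as follows. The cache geometry here (32 KiB, 4-way, 64-byte lines) is purely illustrative, not something from this thread:

    ```python
    # Hypothetical set-associative cache geometry (illustrative values).
    CACHE_SIZE = 32 * 1024   # total cache size in bytes
    WAYS = 4                 # lines per set (4-way set associative)
    LINE_SIZE = 64           # bytes per cache line
    NUM_SETS = CACHE_SIZE // (WAYS * LINE_SIZE)   # 128 sets

    OFFSET_BITS = LINE_SIZE.bit_length() - 1      # 6 bits for the byte offset
    INDEX_BITS = NUM_SETS.bit_length() - 1        # 7 bits for the set index

    def split_address(addr):
        """Split an address into (tag, set index, byte offset)."""
        offset = addr & (LINE_SIZE - 1)                   # low bits: byte within line
        index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)    # middle bits: which set
        tag = addr >> (OFFSET_BITS + INDEX_BITS)          # remaining high bits: tag
        return tag, index, offset
    ```

    On a lookup, the hardware uses `index` to pick the set, then compares `tag` against all `WAYS` stored tags in that set in parallel; a match is a hit.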

    I also understand how the memory management unit and the translation lookaside buffer work for virtual memory. Page tables map the virtual addresses of each process to physical addresses in memory.

    But what does a virtual cache mean? If you give cache lines virtual addresses instead of physical ones, what are you achieving?
    And do virtual cache addresses get mapped using the same MMU as real memory?

    I've read Wikipedia and such but I just can't understand why this would be useful.

  3. Apr 22, 2014 #2


    Homework Helper

    For most processors, the cache addresses are virtual addresses. If the data resides in cache, there's no need to obtain a physical address for that cached data. The conversion from virtual to physical is performed via descriptor tables outside of any cache operations.

    On a side note, for memory mapped I/O like DMA that uses physical addresses, the pages of memory for that I/O need to be loaded and locked, then their addresses translated from virtual to physical for the memory mapped I/O to take place. For Windows this is done via a function called MmProbeAndLockPages().
  4. Apr 23, 2014 #3
    Ok thanks. So if the processor wants to access memory, it sends the virtual address to the MMU and the cache, which then works out whether it resides in the cache (it always resides in RAM as well)?
    And if it does reside in cache, does it then access the cache using a physical address (set number, tag and offset), or in some other way? If so, how does a physical cache differ?

    Basically I can't see the difference between physical and virtual cache.
  5. Apr 23, 2014 #4


    Homework Helper

    As mentioned in my last post, for most processors, if there is a cache hit, then no physical address mapping takes place, since there's no need to do this if the desired data is in the cache. How the rest of this is done for non-cache hits depends on the processor. X86 processors use a translation lookaside buffer, a cache built from content-addressable memory (fully associative), to translate a subset of possible virtual addresses to physical addresses. If an address is not in the TLB, then the descriptor tables that reside in RAM are used. If the desired pages are currently not in RAM, then the operating system swaps out pages if needed and swaps in pages from the swap file. The descriptor tables are updated to reflect the swapped pages, and the TLB is updated to remove any swapped-out pages (it may just be cleared; I don't know if it's possible to partially clear out the TLB).
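    The lookup order described above (TLB first, page-table walk on a miss, page fault if the page isn't resident) can be sketched like this. The page size, table contents, and single-level "walk" are all simplified stand-ins, not a model of real x86 paging:

    ```python
    PAGE_SIZE = 4096

    tlb = {}                                  # virtual page number -> physical frame number
    page_table = {0x10: 0x80, 0x11: 0x81}     # hypothetical in-RAM mappings

    def translate(vaddr):
        """Translate a virtual address, consulting the TLB before the page tables."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in tlb:                        # TLB hit: no table walk needed
            pfn = tlb[vpn]
        elif vpn in page_table:               # TLB miss: walk the in-RAM tables
            pfn = page_table[vpn]
            tlb[vpn] = pfn                    # cache the translation for next time
        else:                                 # not resident: OS must swap the page in
            raise KeyError("page fault")
        return pfn * PAGE_SIZE + offset
    ```

    On a real processor the TLB has a fixed capacity and an eviction policy, and the walk goes through multiple table levels, but the first-TLB-then-tables ordering is the same.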

    There may be some processors that map virtual to physical addresses first, then use a cache based on physical addresses, but the mapping step would normally increase the overhead, so I'm not sure if any currently produced processors do this.
  6. Apr 24, 2014 #5
    There is (almost) no difference between those caches. The differences are in what's around them. For caches that are keyed by physical addresses, virtual-to-physical address translation must always take place before addressing the cache. Even if that is very efficient, it is still extra processing, which requires either more time or more silicon, or both. Caches keyed by virtual addresses eliminate this need, and the virtual-to-physical address translation is done only when there is a real need to access memory.

    Now the "almost" bit. Virtual address caches are somewhat more expensive because a typical virtual address space is (much) greater than a typical physical address space. So a virtual address cache needs more address lines, more logic elements and more silicon (more power, too).
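    The contrast in lookup order can be sketched as two toy functions. Everything here is hypothetical: `translate` stands in for the MMU, and the caches and memory are plain dictionaries rather than tag/set structures:

    ```python
    def lookup_physical_cache(vaddr, cache, translate, memory):
        """Physically keyed cache: translation happens on EVERY access."""
        paddr = translate(vaddr)          # pay for translation up front
        if paddr in cache:
            return cache[paddr]           # hit, but only after translating
        cache[paddr] = memory[paddr]      # miss: fill from memory
        return cache[paddr]

    def lookup_virtual_cache(vaddr, cache, translate, memory):
        """Virtually keyed cache: translation only happens on a miss."""
        if vaddr in cache:
            return cache[vaddr]           # hit: translation skipped entirely
        paddr = translate(vaddr)          # only a miss pays for translation
        cache[vaddr] = memory[paddr]      # fill from memory at the physical address
        return cache[vaddr]
    ```

    Both return the same data; the virtual version simply moves the translation off the hit path, which is exactly the saving described above (at the cost of wider tags, and of the aliasing/homonym issues a real design also has to handle).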