
Virtual Caches

  1. Apr 21, 2010 #1
    Hi all,

    I'm trying to understand the difference between a virtually indexed, virtually tagged cache and a virtually indexed, physically tagged cache. I've read some material but can't get an intuitive grasp of it, especially the virtually indexed, virtually tagged cache.

    The tag in a cache is basically the location of the data in physical memory, right? In that case, how can a virtual address be used as a tag?

    Any help is appreciated.
     
  3. Apr 22, 2010 #2

    rcgldr

    Homework Helper

    Note: I found the wiki article's cache entry structure section confusing, so I proposed a change on its discussion page:

    http://en.wikipedia.org/wiki/Talk:CPU_cache#Cache_entry_structure

    wiki articles:

    http://en.wikipedia.org/wiki/CPU_cache#Address_translation

    http://en.wikipedia.org/wiki/Translation_lookaside_buffer

    Each cache row entry includes data, tag, and a valid bit (there may be other information for replacement algorithms). Each cache row entry is addressed by index (unless it's a fully associative cache), and the displacement is used to address the data within a cache row.
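    The split of an address into tag, index, and displacement can be sketched as below. The line size and set count are made-up illustrative parameters, not any particular CPU's layout:

    ```python
    # Sketch: splitting an address into tag / index / displacement (offset)
    # for a hypothetical cache. Parameters are illustrative only.
    LINE_SIZE = 64   # bytes per cache row -> 6 displacement bits
    NUM_SETS = 128   # indexable rows     -> 7 index bits

    OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 6
    INDEX_BITS = NUM_SETS.bit_length() - 1     # 7

    def split_address(addr):
        offset = addr & (LINE_SIZE - 1)                 # within-row displacement
        index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)  # selects the cache row
        tag = addr >> (OFFSET_BITS + INDEX_BITS)        # remaining upper bits
        return tag, index, offset
    ```

    The three fields are disjoint bit ranges, so they reassemble into the original address.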

    Not necessarily — if tag and index are both based on the virtual address, the data is accessed from the cache, not from memory. The physical address isn't required once the data is loaded into a virtually addressed cache. The cache would probably still have physical address information, needed for writes to cache (so the writes could be flushed to physical memory) and for "snooping" for any writes to physically addressed memory, such as DMA-type memory writes, in order to invalidate or update cache entries.

    The wiki article mentions using virtually indexed, physically tagged level 1 caches. For a cache that isn't fully associative, a portion of the virtual address is used to index the cache row(s) (n rows for an n-way cache); the physical tag data is then read from the cache row(s) in parallel with the TLB's virtual-to-physical address translation, and then the cache and TLB physical tags are compared. Since the index into the cache is based on a virtual address, the physical tag in a cache row entry needs to include all address bits except for the displacement portion.
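    That parallel lookup can be sketched roughly as below. The TLB and cache contents are toy dictionaries with made-up values, and the bit widths are illustrative assumptions, not a real design:

    ```python
    # Sketch of a virtually indexed, physically tagged (VIPT) lookup.
    # All parameters and table contents here are made up for illustration.
    PAGE_BITS = 12                  # 4 KiB pages (assumed)
    OFFSET_BITS, INDEX_BITS = 6, 7  # 64-byte rows, 128 sets (assumed)

    tlb = {0x00042: 0x00777}        # virtual page -> physical page (toy data)
    cache = {}                      # index -> (physical_tag, data)

    def vipt_lookup(vaddr):
        index = (vaddr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        # In hardware these two reads happen in parallel:
        row = cache.get(index)               # read row via the virtual index
        ppage = tlb.get(vaddr >> PAGE_BITS)  # translate the virtual page number
        if row is None or ppage is None:
            return None                      # cache miss or TLB miss
        paddr = (ppage << PAGE_BITS) | (vaddr & ((1 << PAGE_BITS) - 1))
        phys_tag = paddr >> OFFSET_BITS      # all bits except the displacement
        stored_tag, data = row
        return data if stored_tag == phys_tag else None
    ```

    Note the stored tag here covers every physical address bit above the displacement, as described above, because the index came from the virtual address.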

    For a fully associative cache, the index isn't used, so the address tag needs to be all virtual or all physical.
     
    Last edited: Apr 22, 2010
  4. Apr 22, 2010 #3
    Thank you for your reply!

    I was reading a paper about the Synonym Lookaside Buffer. If multiple virtual addresses map to the same physical address, then they are synonyms. The ambiguity is resolved by designating one as the primary virtual address and the rest as secondary virtual addresses, and providing a synonym lookaside buffer to translate the secondary virtual addresses to the primary one.
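    As I understand the idea, it can be sketched like this. The page mappings are toy data, and picking the first-seen virtual page as "primary" is my own assumption; the paper's actual policy may differ:

    ```python
    # Sketch: several virtual pages mapping to one physical page (synonyms),
    # with one designated primary and an SLB-like table sending secondaries
    # to the primary. Mappings and the primary-selection rule are made up.
    page_table = {0x10: 0xAA, 0x20: 0xAA, 0x30: 0xBB}  # vpage -> ppage (toy)

    primary_for_ppage = {}  # physical page -> its primary virtual page
    slb = {}                # secondary vpage -> primary vpage

    for vpage, ppage in sorted(page_table.items()):
        if ppage not in primary_for_ppage:
            primary_for_ppage[ppage] = vpage           # first seen = primary
        else:
            slb[vpage] = primary_for_ppage[ppage]      # later ones = secondary

    def canonical_vpage(vpage):
        # Secondaries translate through the SLB; primaries pass through.
        return slb.get(vpage, vpage)
    ```

    The cache would then be accessed only with the canonical (primary) virtual address, so each physical block has a single cache location.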

    The paper says that the Translation Lookaside Buffer has the information about whether a virtual address is primary or secondary. I can't understand how the TLB has this information. Can someone please explain?

    Here's the link to the paper:
    http://www.computer.org/portal/web/csdl/doi/10.1109/TC.2008.108
     
  5. Apr 22, 2010 #4

    rcgldr

    Homework Helper

    I'm not a member of IEEE, so I don't have access to that paper. I was able to read the summary.

    Not being able to read the paper, I'm not sure why this was stated.

    The summary refers to a level 1 cache, which I assume means the level 1 cache as implemented on an x86-type CPU, which also uses a TLB. These use virtual address bits for indexing, but physical address bits (all but the displacement address bits) for the tags. You don't get synonyms with this scheme, and since there are no synonyms, there aren't any primary/secondary virtual addresses.

    The proposed scheme appears to change this to one accessed solely by virtual address bits, which works, but you'd still need physical address information stored in the cache in order to maintain cache coherency by "snooping" or "snarfing" the memory bus for any external writes or paging activity.

    http://en.wikipedia.org/wiki/Cache_coherence

    What I don't understand is why you'd want a cache design that would allow a synonym to be created. It seems this can only happen if you allow cache data block boundaries to overlap with each other, rather than keeping them separated into fixed-size blocks based on physical memory boundaries, perhaps based on the largest single RAM memory access, which would be triple-wide on a Core i7 CPU.
     
  6. Apr 22, 2010 #5
    I'd like to know what a "short miss" is in a cache. How is it different from a regular cache miss?
     
  7. Apr 22, 2010 #6

    rcgldr

    Homework Helper

    A cache miss means that the datablock requested by the virtual address isn't in the cache. A "short miss" apparently indicates some type of conflict restricting writing to a cached datablock. I'm not familiar with this, and I don't know how consistently the term "short miss" is used. I did a web search and found a patent that includes the term "short miss".

    http://www.freepatentsonline.com/5113514.html

    In the patent description, the same datablock can end up in two (or more) caches, and if so the cache state for that datablock is marked as shared. A short miss occurs when a write is done to cached operand data marked as shared (because that same data is cached in a secondary cache as well). The patent describes how it deals with such situations.
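    The distinction between a miss, a short miss, and a hit on a write can be sketched as a toy state check. The state names are MESI-style assumptions on my part; the patent's exact protocol and terminology may differ:

    ```python
    # Toy sketch of the "short miss" idea described above: a write to a line
    # whose coherence state is Shared can't proceed until the other cached
    # copies are dealt with. States and handling are illustrative only.
    def classify_write(line_state):
        if line_state is None:
            return "miss"        # block not cached at all -> full miss
        if line_state == "shared":
            return "short miss"  # cached, but exclusive ownership is needed first
        return "hit"             # e.g. "exclusive" or "modified": write proceeds
    ```

    So a short miss isn't about the data being absent; the data is present, but the write is stalled by the shared state until the conflict is resolved.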
     