Understanding Virtually Indexed, Virtually Tagged Caches

  • Thread starter: computerorg

Discussion Overview

The discussion focuses on understanding the differences between virtually indexed, virtually tagged caches and virtually indexed, physically tagged caches. Participants explore concepts related to cache architecture, including the role of tags, synonyms in virtual addresses, and specific cache behaviors like "short misses." The conversation includes theoretical aspects and technical clarifications regarding cache design and operation.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant seeks clarity on how a virtual address can serve as a tag in a virtually indexed, virtually tagged cache, questioning the relationship between tags and physical memory locations.
  • Another participant explains that in a virtually addressed cache, the physical address is not required once data is loaded, but physical address information may still be necessary for cache coherence and write operations.
  • Discussion on synonyms in virtual addresses arises, with one participant referencing a paper on the Synonym Lookaside Buffer and questioning how the TLB identifies primary and secondary virtual addresses.
  • A different participant, unable to access the paper, expresses skepticism about the existence of synonyms in a cache design that uses physical address bits for tags, suggesting that this design avoids the issue of synonyms altogether.
  • Participants discuss the concept of "short misses," with one noting that it may refer to conflicts that restrict writing to a cached data block, while another mentions a patent that describes scenarios involving shared data blocks across multiple caches.

Areas of Agreement / Disagreement

Participants express differing views on the implications of using virtual addresses as tags and the existence of synonyms in cache designs. The discussion remains unresolved regarding the definitions and implications of "short misses" and the specifics of cache coherence mechanisms.

Contextual Notes

There are limitations in understanding the nuances of cache design, particularly regarding the handling of synonyms and the implications of different indexing schemes. Some participants reference external materials, which may not be universally accessible, leading to gaps in the discussion.

computerorg
Hi all,

I'm trying to understand the difference between a virtually indexed, virtually tagged cache and a virtually indexed, physically tagged cache. I've read some material but can't get an intuitive grasp of the concepts, especially the virtually indexed, virtually tagged cache.

The tag in a cache is basically the location of the data in physical memory, right? In that case, how can a virtual address be used as a tag?

Any help is appreciated.
 
computerorg said:
difference between virtually indexed virtually tagged cache and virtually indexed physically tagged cache.

Note: I found the wiki article's cache entry structure section confusing, so I proposed a change on the talk page:

http://en.wikipedia.org/wiki/Talk:CPU_cache#Cache_entry_structure

wiki articles:

http://en.wikipedia.org/wiki/CPU_cache#Address_translation

http://en.wikipedia.org/wiki/Translation_lookaside_buffer

Each cache row entry includes data, tag, and a valid bit (there may be other information for replacement algorithms). Each cache row entry is addressed by index (unless it's a fully associative cache), and the displacement is used to address the data within a cache row.

The tag in a cache is basically the location of the data in the physical memory right?
Not if tag and index are both based on the virtual address, since the data is accessed from the cache, not from memory. The physical address isn't required once the data is loaded into a virtually addressed cache. The cache would probably still have physical address information, needed for writes to cache (so the writes could be flushed to physical memory) and for "snooping" for any writes to physically addressed memory, such as DMA type memory writes, in order to invalidate or update cache entries.

The wiki article mentions using a virtually indexed, physically tagged scheme for level 1 caches. For a set-associative (non-fully-associative) cache, a portion of the virtual address is used to index the cache row(s) (n rows for an n-way cache); the physical tag data is then read from the cache row(s) in parallel with the TLB's virtual-to-physical address translation, and the cache and TLB physical tags are compared. Since the index into the cache is based on a virtual address, the physical tag in a cache row entry needs to include all address bits except the displacement portion.

For a fully associative cache, the index isn't used, so the address tag needs to be all virtual or all physical.
 
Thank you for your reply!

I was reading a paper about the synonym lookaside buffer. If multiple virtual addresses map to the same physical address, they are synonyms. The ambiguity is resolved by designating one as the primary virtual address and the rest as secondary virtual addresses, and by providing a synonym lookaside buffer to translate the secondary virtual addresses to the primary one.

The paper says that the translation lookaside buffer records whether a virtual address is primary or secondary. I can't understand how the TLB gets this information. Can someone please explain?

Here's the link to the paper:
http://www.computer.org/portal/web/csdl/doi/10.1109/TC.2008.108
 
computerorg said:
I was reading a paper about the Synonym Lookaside buffer. If multiple virtual addresses map to the same physical address, then they are synonyms.
I'm not a member of the IEEE, so I don't have access to that paper. I was able to read the summary.

The paper says that the Translation Lookaside buffer has the information whether the virtual address is primary/secondary. I can't understand how the TLB has this information.
Not being able to read the paper, I'm not sure why this was stated.

The summary refers to a level 1 cache, which I assume means the level 1 cache as implemented on an x86-type CPU, which also uses a TLB. These use virtual address bits for indexing, but physical address bits (all but the displacement address bits) for the tags. You don't get synonyms with this scheme, and since there are no synonyms, there aren't any primary/secondary virtual addresses.

The paper appears to propose a cache accessed solely by virtual address bits, which works, but you'd still need physical address information stored in the cache in order to maintain cache coherency by "snooping" or "snarfing" the memory bus for any external writes or paging activity.

http://en.wikipedia.org/wiki/Cache_coherence

What I don't understand is why you'd want a cache design that allows a synonym to be created in the first place. It seems this can only happen if you allow cache data block boundaries to overlap, rather than keeping them separated into fixed-size blocks based on physical memory boundaries, perhaps based on the largest single RAM access, which would be triple-wide on a Core i7 CPU.
 
I'd like to know what a "short miss" is in a cache. How is it different from an ordinary cache miss?
 
computerorg said:
I'd like to know what a short miss is in cache. How is it different from cache miss?
A cache miss means that the data block requested by the virtual address isn't in the cache. A "short miss" apparently indicates some type of conflict restricting writes to a cached data block. I'm not familiar with this, and I don't know how consistently the term "short miss" is used. A web search turned up a patent that uses the term:

http://www.freepatentsonline.com/5113514.html

In the patent description, the same data block can end up in two (or more) caches; if so, the cache state for that data block is marked as shared. A short miss occurs when a write is done to cached operand data marked as shared (because the same data is also cached in a secondary cache). The patent describes how such situations are handled.
 
