Trouble understanding Virtual cache

  • Thread starter: Fionn00
  • Tags: Virtual
AI Thread Summary
Virtual cache refers to cache systems that use virtual addresses instead of physical addresses, allowing for faster access since no physical address translation is needed when data is cached. When a processor accesses memory, it checks the cache using virtual addresses, and if a cache hit occurs, it retrieves data without converting to a physical address. This process is efficient as it reduces overhead, but it can be more complex and costly due to the larger address space of virtual memory compared to physical memory. In contrast, physical caches require address translation before accessing the cache, which adds processing time and complexity. Understanding the distinctions between virtual and physical caches is crucial for optimizing memory access and performance in computing systems.
Fionn00
Trouble understanding "Virtual cache"

Hi,

I'm having trouble understanding what virtual cache actually means.

I understand that to get a cache hit (in a physical cache) you index into a cache set and compare all the tags in that set with the tag bits of the address you are looking for.
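
For concreteness, here is a minimal sketch of that lookup, assuming a hypothetical 4-way set-associative cache with 64-byte lines and 64 sets (16 KiB total); the sizes and names are invented purely for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 64
#define NUM_SETS   64
#define WAYS        4

struct cache_line {
    bool     valid;
    uint64_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct cache_line cache[NUM_SETS][WAYS];

/* Split the address into offset, index and tag, then compare the tag
 * against every way in the selected set. */
bool cache_hit(uint64_t addr)
{
    uint64_t index = (addr / LINE_BYTES) % NUM_SETS;  /* selects the set  */
    uint64_t tag   = addr / (LINE_BYTES * NUM_SETS);  /* identifies line  */

    for (int way = 0; way < WAYS; way++) {
        if (cache[index][way].valid && cache[index][way].tag == tag)
            return true;                              /* tag match: hit   */
    }
    return false;                                     /* miss             */
}
```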

I also understand how the memory management unit and the translation lookaside buffer work for virtual memory. Page tables map the virtual addresses of each process to physical addresses in memory.

But what does virtual cache mean? I mean, if you index the cache with virtual addresses instead of physical ones, what are you achieving?
And also, do virtual cache addresses get mapped using the same MMU as ordinary memory accesses?

I've read Wikipedia and such but I just can't understand why this would be useful.

Thanks!
 
For most processors, the cache addresses are virtual addresses. If the data resides in cache, there's no need to obtain a physical address for that cached data. The conversion from virtual to physical is performed via descriptor tables outside of any cache operations.
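
To make that ordering concrete, here is a rough sketch of the access path being described, using a toy one-line "cache" and invented helper names (none of this is any real CPU's logic): the cache is probed with the virtual address, and translation only happens on a miss.

```c
#include <stdbool.h>
#include <stdint.h>

static uint8_t ram[1 << 20];          /* pretend physical memory           */

/* Toy "virtually addressed cache": a single line keyed by virtual address. */
static bool     line_valid;
static uint64_t line_vaddr;
static uint8_t  line_data;

static bool probe_cache(uint64_t vaddr, uint8_t *out)
{
    if (line_valid && line_vaddr == vaddr) {
        *out = line_data;
        return true;
    }
    return false;
}

static uint64_t translate(uint64_t vaddr)   /* stand-in for the MMU/TLB    */
{
    return vaddr & ((1u << 20) - 1);        /* toy mapping, not a real one */
}

uint8_t load_byte(uint64_t vaddr)
{
    uint8_t b;
    if (probe_cache(vaddr, &b))             /* hit: no translation at all  */
        return b;

    uint64_t paddr = translate(vaddr);      /* miss: translate, then fetch */
    b = ram[paddr];
    line_valid = true;                      /* fill the cache line         */
    line_vaddr = vaddr;
    line_data  = b;
    return b;
}
```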

On a side note, for memory mapped I/O like DMA that uses physical addresses, the pages of memory for that I/O need to be loaded and locked, then their addresses translated from virtual to physical for the memory mapped I/O to take place. For Windows this is done via a function called MmProbeAndLockPages().
 
Ok thanks, so if the processor wants to access memory, it sends the virtual address to the MMU and the cache, which then works out whether the data resides in cache (it always resides in RAM as well)?
And if it resides in cache, does it then go to the cache using a physical address (set number, tag and offset), or does it access the cache in some other way? If so, then how does a physical cache differ?

Basically I can't see the difference between physical and virtual cache.
 
As mentioned in my last post, for most processors, if there is a cache hit, then no physical address mapping takes place, since there's no need to do this if the desired data is in the cache. How the rest of this is done for non-cache hits depends on the processor. X86 processors use a translation look-aside buffer (TLB), a type of content-addressable memory (fully associative), to translate a subset of possible virtual addresses to physical addresses. If the address is not in the TLB, then the descriptor tables that reside in RAM are used. If the desired pages are currently not in RAM, then the operating system swaps out pages if needed and swaps in pages from the swap file. The descriptor tables are updated to reflect the swapped pages, and the TLB is updated to remove any swapped-out pages (it may just be cleared; I don't know if it's possible to partially clear out the TLB).
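
As a rough illustration of that translation path, here is a sketch assuming a hypothetical 16-entry fully associative TLB in front of a toy single-level page table in RAM; a real x86 walk is multi-level, and all sizes and names here are invented.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12                 /* 4 KiB pages                      */
#define TLB_ENTRIES 16
#define NUM_PAGES   1024               /* toy single-level page table      */

struct tlb_entry { bool valid; uint64_t vpn; uint64_t pfn; };

static struct tlb_entry tlb[TLB_ENTRIES];
static uint64_t page_table[NUM_PAGES]; /* VPN -> PFN, maintained by the OS */

uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    /* Fully associative lookup: every TLB entry is compared with the VPN. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | offset;

    /* TLB miss: consult the page table in RAM, then cache the translation
     * (trivial replacement policy: slot chosen by vpn % TLB_ENTRIES). */
    uint64_t pfn  = page_table[vpn % NUM_PAGES];
    int      slot = (int)(vpn % TLB_ENTRIES);
    tlb[slot] = (struct tlb_entry){ .valid = true, .vpn = vpn, .pfn = pfn };
    return (pfn << PAGE_SHIFT) | offset;
}
```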

There may be some processors that map virtual to physical addresses first, then use a cache based on physical addresses, but the mapping step would normally increase the overhead, so I'm not sure if any currently produced processors do this.
 
Fionn00 said:
Basically I can't see the difference between physical and virtual cache.

There is (almost) no difference between those caches. The differences are in what's around them. For caches that are keyed by physical addresses, virtual to physical address translation must always take place before addressing the cache. Even if that is very efficient, it is still extra processing, which requires either more time or more silicon, or both. Caches keyed by virtual addresses eliminate this need, and the virtual to physical address translation is done only when there is a real need to access memory.

Now the "almost" bit. Virtual address caches are somewhat more expensive because a typical virtual address space is (much) greater than a typical physical address space. So a virtual address cache needs more address lines, more logic elements and more silicon (more power, too).
 