Find an Item Stored in Memory: Assoc. Memory & Time Reduction

  • Thread starter: prashantgolu
  • Tags: associative memory

Discussion Overview

The discussion revolves around the concept of associative memory and its potential to reduce the time required to find items stored in memory by allowing access based on the content of the data rather than its address. Participants explore theoretical implications, practical applications, and the mechanics of content addressable memory (CAM).

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants propose that accessing data by content rather than by address can significantly reduce search time.
  • Others argue that knowing the address allows for quick access, suggesting that using data structures like hash tables is more efficient for locating data.
  • A participant questions the necessity of associative memory, prompting discussions about its applications in address translation and data compression.
  • Several participants describe how content addressable memory operates, highlighting its parallel search capability compared to traditional memory access methods.
  • Some express skepticism about the practical existence of fully associative memory systems, questioning whether they are merely theoretical constructs or have real-world applications.

Areas of Agreement / Disagreement

Participants express differing views on the efficiency and necessity of associative memory versus traditional address-based access methods. There is no consensus on the superiority of one approach over the other, and the discussion remains unresolved regarding the practical implications of associative memory.

Contextual Notes

Participants note that the discussion involves assumptions about data structures and search algorithms, as well as the performance characteristics of different memory access methods. The implications of hash code collisions and the specifics of implementation are also mentioned but not fully explored.

Who May Find This Useful

This discussion may be of interest to those studying computer architecture, memory management, and data structures, as well as professionals involved in systems design and optimization.

prashantgolu:
The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data rather than by an address.

Please explain how?
 
prashantgolu said:
The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data rather than by an address.

Please explain how?

I may have misunderstood you, but that claim doesn't make sense on its own. If you know where the data is stored (i.e., the address), it takes only a few instructions to fetch a machine word of that data and load it into a register.

The best computational way of getting the address of some piece of data, given that you don't know where it is stored, is to use a data structure that "points" to the data.

The best case I can think of off the top of my head is a large hash table with a hash algorithm that has good collision behavior. If you don't have many items, lookups should be extremely fast [think O(1)] if your implementation is right.

If you didn't use any structure to keep track of anything and had to do a brute-force search for the data, you could, since RAM is reasonably fast. But I can't picture any programmer not using some kind of bookkeeping structure, like a hash table or a binary tree, to resolve pointers from some information such as a predetermined key.
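A minimal sketch of the contrast described above, with illustrative names (the records and keys are made up for the example): a brute-force scan examines entries one at a time, while a hash table uses the key itself to jump straight to the slot.

```python
# Illustrative example: locating a record by key with and without
# a bookkeeping structure. Data here is invented for the sketch.

records = [("alice", 1), ("bob", 2), ("carol", 3)]

# Brute-force search: O(n) -- examine entries one at a time.
def linear_find(key):
    for k, v in records:
        if k == key:
            return v
    return None

# Hash table: amortized O(1) -- the key's hash points straight at the slot.
index = dict(records)

def hash_find(key):
    return index.get(key)

print(linear_find("carol"))  # 3
print(hash_find("carol"))    # 3
```

Both return the same value; the difference is in how many entries each one has to touch on the way there.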
 
Then what is the need for associative memory (CAM), in which we search by content rather than by address?
 
The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data rather than by an address.

This is written under the topic "associative memory" in the Computer System Architecture book by Mano.

Could somebody explain how?
 
prashantgolu said:
The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data rather than by an address.

This is written under the topic "associative memory" in the Computer System Architecture book by Mano.

Could somebody explain how?

I think perhaps you are not understanding the statement itself, because it is worded in a confusing way, as chiro pointed out.

What it SHOULD say, to make sense, is something like this: the time required to find an item stored in memory can be much less if the data is stored in a content-addressable form rather than having to SEARCH FOR THE ADDRESS each time.

That is, if you have data in a random table, the only way to find something is to look through the table. On average, that means looking halfway through the table for each search. If your table is a few million items long, that can REALLY take a while.

BUT ... if you store your data in a content-addressable way using a hash code, then all you need to do is apply the hash algorithm to the part of the data you have, and you can go immediately to the rest. This is a BIT of an oversimplification, since I'm not accounting for hash code collisions, but as long as your table is reasonably sparse that doesn't add much to the time. The time to generate the hash code is likely on the order of a few, or a few dozen, item comparisons ... WAAAY less than searching half of a several-million-item table.

I can't explain it more fully than that without giving a full tutorial on hash codes and associative arrays; if you need further understanding, I think you should do some basic reading on those topics.
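The cost argument above can be made concrete by counting comparisons. This is a hedged sketch with arbitrary sizes (the million-entry table and the choice of a "typical" item halfway through are assumptions for the illustration, not from the thread):

```python
# Count comparisons for a linear scan of a shuffled table versus a
# hash lookup. Table size and target choice are illustrative.
import random

N = 1_000_000
keys = list(range(N))
random.shuffle(keys)

target = keys[N // 2]  # a "typical" item, halfway through the scan order

# Linear scan: comparisons grow with position (~N/2 on average).
comparisons = 0
for k in keys:
    comparisons += 1
    if k == target:
        break
print(comparisons)  # 500001 -- about half the table

# Hash lookup: the key's hash leads directly to its slot, no scan.
positions = {k: i for i, k in enumerate(keys)}
print(positions[target] == N // 2)  # True
```

The half-million comparisons on one side versus a single hash computation on the other is exactly the gap the post describes.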
 
A content addressable memory (fully associative) has one comparator for each memory cell. When a value is to be searched for within the content addressable memory, all of the memory cells are read and compared at the same time (in parallel), in one memory read plus one compare cycle; if there is a match, the address is returned based on which cell matched.

In a normal search for a value in memory, only one memory cell is read and compared at a time. In the simplest implementation, every cell in the memory would have to be read and compared one at a time until a match was found or all memory cells were read (and no match found).

In some situations, multiple matches are possible. The comparator outputs from the CAM go into an array of match bits, and one of the matches is converted into an address. This can be repeated to obtain a list of addresses if needed.
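The behavior described above can be sketched in software. This is only a model of the hardware (a real CAM does all the compares simultaneously in circuitry; the list comprehension here stands in for the parallel match lines, and the cell values are invented for the example):

```python
# Software model of a CAM lookup: every cell is compared against the
# search value, and matching addresses fall out of the compare itself.

cells = [7, 42, 7, 13, 42]   # memory contents (illustrative values)
search_value = 42

# One "compare cycle": a match line per cell (parallel in hardware).
match_lines = [cell == search_value for cell in cells]

# Encoder stage: convert the match lines into addresses.
addresses = [addr for addr, hit in enumerate(match_lines) if hit]
print(addresses)  # [1, 4]
```

Note the inversion relative to ordinary RAM: instead of supplying an address and getting data back, you supply data and get addresses back.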
 
rcgldr said:
A content addressable memory (fully associative) has one comparator for each memory cell. When a value is to be searched for within the content addressable memory, all of the memory cells are read and compared at the same time (in parallel), in one memory read plus one compare cycle; if there is a match, the address is returned based on which cell matched.

In a normal search for a value in memory, only one memory cell is read and compared at a time. In the simplest implementation, every cell in the memory would have to be read and compared one at a time until a match was found or all memory cells were read (and no match found).

In some situations, multiple matches are possible. The comparator outputs from the CAM go into an array of match bits, and one of the matches is converted into an address. This can be repeated to obtain a list of addresses if needed.

OK, that's weird. I've never heard of such a thing. Do they exist in the real world, or are they just a math model? I mean, it's clear that they would work, but I can't see anyone actually building one.

Thanks
 
rcgldr said:
A content addressable memory (fully associative) has one comparator for each memory cell.

phinds said:
I never heard of such a thing. Do they exist in the real world or is this just a math model?
Some older computers used a fully associative cache. In the computer tape peripheral industry, content addressable memories (fully associative) are used for the sliding window type (LZ1 / LZ77) compression schemes.

A fully associative memory is more often called a content addressable memory. This wiki link mentions a few devices that use CAMs.

http://en.wikipedia.org/wiki/Content-addressable_memory
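To connect the compression use case to the earlier discussion: the sliding-window (LZ77-style) schemes mentioned above repeatedly search a window of recent bytes for the longest match against upcoming input. Below is a hedged software sketch of that match search; the function name and data are illustrative, and where this loop scans window positions serially, a hardware CAM compares them in parallel:

```python
# Serial sketch of the LZ77-style match search that a CAM accelerates.
def longest_match(window: bytes, lookahead: bytes):
    """Return (offset, length) of the longest window match for lookahead."""
    best = (0, 0)
    for start in range(len(window)):      # a CAM checks all starts at once
        length = 0
        while (length < len(lookahead)
               and start + length < len(window)
               and window[start + length] == lookahead[length]):
            length += 1
        if length > best[1]:
            best = (start, length)
    return best

print(longest_match(b"abracad", b"abra"))  # (0, 4)
```

The encoder then emits the (offset, length) pair instead of the matched bytes, which is where the compression comes from.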
 