Find an Item Stored in Memory: Assoc. Memory & Time Reduction

AI Thread Summary
The discussion centers on the efficiency of accessing stored data in memory by content rather than by address. It highlights that using content-addressable memory (CAM) allows for simultaneous comparison of all memory cells, significantly reducing search time compared to traditional methods that require sequential address lookups. The conversation emphasizes the advantages of hash tables and associative arrays, which can provide quick access to data through hashing, minimizing the need for exhaustive searches. The participants clarify that while traditional memory access involves reading and comparing one cell at a time, CAM enables parallel processing, leading to faster retrieval times. Additionally, examples of real-world applications of CAM, such as in cache systems and data compression algorithms, are mentioned, illustrating its practical relevance in computing.
prashantgolu
The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data rather than by an address.

Please explain to me how...?
 
prashantgolu said:
Time Required to find an item stored in memory can be reduced considerably if stored data can be identified for access by content of data rather than by an address

please explain me how...?

I may have understood you wrong, but that claim doesn't make sense on its own. If you know where the data is stored (i.e., the address), it takes only a few instructions to load a machine word from that address into a register.

The best computational way of finding the address of some piece of data, given that you don't know where it is stored, would be to use a data structure that "points" to the data.

The best case I can think of off the top of my head is a large hash table with a hash algorithm that has good collision properties. If you don't have too many items, lookups should be extremely fast [think O(1)] if your implementation is right.

If you didn't use any structure to keep track of anything and had to do a brute-force search for the data, you could manage it since RAM is reasonably fast, but I can't picture any programmer not using some kind of bookkeeping structure, like a hash table or a binary tree, to resolve pointers from some information such as a predetermined key.
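The contrast drawn here, a brute-force scan versus a hash-table lookup, can be sketched in a few lines (Python used purely for illustration; the names and data are made up):

```python
# Build a table of (key, value) records with no ordering we can exploit.
N = 100_000
records = [(f"key{i}", i * 2) for i in range(N)]

# Brute-force search: read and compare one record at a time.
def linear_find(key):
    for k, v in records:
        if k == key:
            return v
    return None

# Hash-table lookup: the key is hashed straight to its bucket, O(1) on average.
index = dict(records)

key = "key99999"
assert linear_find(key) == index[key]  # same answer, very different cost
```

The linear scan touches on average half the table per query, while the dict lookup does a constant amount of work regardless of table size.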
 
Then what is the need for associative memory (CAM), in which we search by content rather than by address?
 
Time Required to find an item stored in memory can be reduced considerably if stored data can be identified for access by content of data rather than by an address

this is written in the section on "associative memory" in Computer System Architecture by Mano

could somebody explain how...?
 
prashantgolu said:
Time Required to find an item stored in memory can be reduced considerably if stored data can be identified for access by content of data rather than by an address

this is written in the section on "associative memory" in Computer System Architecture by Mano

could somebody explain how...?

I think perhaps you are not understanding the question itself, because it is worded in a nonsensical way, as chiro pointed out.

What it SHOULD say, to make sense, is something like this: The time required to find an item stored in memory can be much less if the data is stored in a content-addressable form rather than having to LOOK FOR THE ADDRESS each time.

That is, if you have data in an unordered table, the only way to find something is to scan through the table. On average, that means you'll look halfway through the table for each search. If your table is a few million items long, that could REALLY take a while.

BUT ... if you store your data in a content-addressable way using a hash code, then all you need to do is apply the hash algorithm to the part of the data you have and you can go immediately to the rest. This is a BIT of an oversimplification, since I'm not accounting for hash code collisions here, but as long as your table is reasonably sparse, collisions don't add much to the time, and the time to generate the hash code is likely to be on the order of a few, or a few dozen, item comparisons ... WAAAY less than searching half of a several-million-item table.

I cannot explain it more fully than that without giving a full tutorial on hash codes and associative arrays, and I think you should do some basic reading on these topics if you need further understanding.
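The idea of hashing part of the data to find the rest, with collisions handled by chaining in a sparse table, can be sketched as follows (a minimal illustration, with made-up keys and a made-up table size):

```python
# A tiny associative array: hash the key to find the record directly.
# Collisions are handled by chaining; in a sparse table chains stay short.
TABLE_SIZE = 64
table = [[] for _ in range(TABLE_SIZE)]

def store(key, value):
    bucket = hash(key) % TABLE_SIZE
    table[bucket].append((key, value))

def lookup(key):
    bucket = hash(key) % TABLE_SIZE   # one hash, not a scan of the table
    for k, v in table[bucket]:        # short chain, usually a few items at most
        if k == key:
            return v
    return None

store("alice", 42)
store("bob", 7)
assert lookup("alice") == 42
assert lookup("carol") is None
```

The cost per lookup is one hash plus a walk down one short chain, independent of how many total records are stored, which is the source of the O(1) average behavior mentioned above.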
 
A content addressable memory (fully associative) has one comparator for each memory cell. When a value is to be searched for within the content addressable memory, all of the memory cells are read and compared at the same time (in parallel), in one memory + one compare cycle, and then if there is a match, the address is returned based on which cell had the match.

In a normal search for a value in memory only one memory cell is read and compared at a time. In the simplest implementation, every cell in the memory would have to be read and compared one at a time until a match was found or all memory cells were read (and no match found).

In some situations, multiple matches are possible. The comparator outputs from the CAM go into an array of bits, and one of the matches will be converted into an address. This could be repeated to obtain a list of addresses if needed.
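This behavior can be modeled in software (a sketch only; in a real CAM each cell has its own hardware comparator, so all comparisons complete in a single memory-plus-compare cycle rather than in a loop):

```python
# Software model of a content-addressable memory: every cell is compared
# against the search value, and all matching addresses are returned.
cam = [17, 42, 5, 42, 99]            # contents of the memory cells

def cam_search(value):
    # In hardware, all cells fire their comparators simultaneously;
    # this comprehension stands in for that parallel compare.
    match_bits = [cell == value for cell in cam]
    return [addr for addr, hit in enumerate(match_bits) if hit]

assert cam_search(42) == [1, 3]      # multiple matches -> list of addresses
assert cam_search(7) == []           # no match found
```

The `match_bits` list corresponds to the array of comparator output bits described above, from which matching addresses are derived.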
 
rcgldr said:
A content addressable memory (fully associative) has one comparator for each memory cell. When a value is to be searched for within the content addressable memory, all of the memory cells are read and compared at the same time (in parallel), in one memory + one compare cycle, and then if there is a match, the address is returned based on which cell had the match.

In a normal search for a value in memory only one memory cell is read and compared at a time. In the simplest implementation, every cell in the memory would have to be read and compared one at a time until a match was found or all memory cells were read (and no match found).

In some situations, multiple matches are possible. The comparator outputs from the CAM go into an array of bits, and one of the matches will be converted into an address. This could be repeated to obtain a list of addresses if needed.

OK, that's weird. I've never heard of such a thing. Do they exist in the real world, or is this just a mathematical model? I mean, it's clear that they would work, but I can't see anyone actually building one.

Thanks
 
rcgldr said:
A content addressable memory (fully associative) has one comparator for each memory cell.

phinds said:
I never heard of such a thing. Do they exist in the real world or is this just a math model?
Some older computers used a fully associative cache. In the computer tape peripheral industry, content-addressable memories (fully associative) are used for sliding-window (LZ1 / LZ77) compression schemes.

A fully associative memory is more often called a content-addressable memory. This wiki link mentions a few devices that use CAMs:

http://en.wikipedia.org/wiki/Content-addressable_memory
 