Modelling Memory Consolidation using Neural Networks

In summary, the author wants to model memory consolidation with a dual-network architecture in which a "hippocampal" network teaches a "neocortical" network, and is weighing Hopfield, Kohonen, and MLP implementations for the two roles. They are worried about catastrophic interference and are looking for advice on how to minimize it.
  • #1
gadgets
Hi guys,

If there's anyone out there who has knowledge in this area, I'm seeking to find out how to model memory consolidation using neural networks.

I was thinking of using a Hopfield network to train another Hopfield network. The first network would represent the hippocampus, and the second network would represent the neocortex. I thought this was appropriate since the hippocampus actually acts as a "teacher" to the neocortex.
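For concreteness, here is a minimal sketch of what one such Hopfield module might look like (Python/NumPy; the class and names are purely illustrative): patterns are stored with the Hebbian outer-product rule and recalled by letting a cue settle into the nearest stored attractor.

```python
import numpy as np

class Hopfield:
    """Minimal binary Hopfield network: Hebbian storage, asynchronous recall."""

    def __init__(self, n_units):
        self.n = n_units
        self.W = np.zeros((n_units, n_units))

    def store(self, patterns):
        # Hebbian outer-product rule; each pattern is a vector of +1/-1 values.
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)  # no self-connections

    def recall(self, probe, sweeps=20):
        # Asynchronously update units until the state settles into an attractor.
        s = probe.astype(float).copy()
        for _ in range(sweeps):
            for i in np.random.permutation(self.n):
                s[i] = 1.0 if self.W[i] @ s >= 0 else -1.0
        return s
```

Used as the hippocampal module, this gives a fast, content-addressable store; the open question is then what the "teaching" signal passed to the second (neocortical) network should be.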

I'm wondering if my thinking is correct?

Any advice would be greatly appreciated.
 
  • #2
After some reading, I think the hippocampus should be implemented as a Kohonen network, since it performs unsupervised (self-organising) learning. This network would then act as a teacher to a Hopfield network (the neocortex).
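For reference, a minimal sketch of the kind of Kohonen (self-organising map) update involved, written in Python/NumPy with illustrative names and parameters: each input is matched to its best-matching unit, and that unit and its grid neighbours are nudged toward the input.

```python
import numpy as np

def train_kohonen(data, n_units=10, epochs=20, lr=0.5, sigma=2.0):
    """Train a 1-D Kohonen map by unsupervised competitive learning."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the node whose weight vector is closest to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood around the winner on the 1-D grid.
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            # Move the winner and its neighbours toward the input.
            weights += lr * h[:, None] * (x - weights)
    return weights
```

(In practice the learning rate and neighbourhood width would usually be decayed over epochs; they are held fixed here only to keep the sketch short.)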

Can any experts advise me on this matter?
 
  • #3
Nobody knows?
 
  • #4
There are many ways you can try it; the outline you listed is worth a try. There are many theories out there... and just as many ways to code them.

Search online for the researcher Sue Becker.
 
  • #5
Thanks.

Do you know how information should be passed from one network to the other?

I was thinking, the first Kohonen network (Hippocampus) can be trained with a training set to get the desired weights. How can I use this network to train the 2nd Kohonen network (Neocortex)?

I read about pseudo-rehearsal but don't quite understand the concept. Does it mean that I should just feed random inputs to the hippocampus, or something like that?
 
  • #6
gadgets said:
I was thinking, the first Kohonen network (Hippocampus) can be trained with a training set to get the desired weights. How can I use this network to train the 2nd Kohonen network (Neocortex)?
What are you trying to achieve with the second Kohonen network? :confused:
 
  • #7
MeJennifer said:
What are you trying to achieve with the second Kohonen network? :confused:

The 2nd Kohonen network is akin to the Neocortex, where all the long term memory is stored. I'm trying to model the concept of consolidation, whereby the Hippocampus learns and transfers the memory to the Neocortex during REM/NREM sleep.

Let me know if I'm going wrong somewhere.

Thanks.
 
  • #8
If you use a Kohonen network, it'll be very sparse IMO. Again, try it out; if it doesn't work, then you will know. You might be interested to read up on a self-motivated researcher in the UK named Steve Grand (look up his book "Growing Up with Lucy").
 
  • #9
gadgets said:
The 2nd Kohonen network is akin to the Neocortex, where all the long term memory is stored. I'm trying to model the concept of consolidation, whereby the Hippocampus learns and transfers the memory to the Neocortex during REM/NREM sleep.

Let me know if I'm going wrong somewhere.

Thanks.
Well, perhaps I'm missing something.
Once the first Kohonen network "clusters" the significant statistical coincidences in the input, how could the second one possibly improve on that?
 
  • #10
MeJennifer said:
Well, perhaps I'm missing something.
Once the first Kohonen network "clusters" the significant statistical coincidences in the input, how could the second one possibly improve on that?

Actually, I'm still in the dark, trying to figure out the right way to implement this.

I read a few papers and learned about catastrophic interference, which is why some researchers have proposed dual-network memory models.

Then I began to wonder how the first network (hippocampus) could possibly transfer or "teach" the second network (neocortex). I read that Robins proposed the idea of pseudo-patterns. From what I understand, this means creating random inputs to feed to the artificial hippocampus; the resulting pseudo-patterns could then be used to train the neocortex.

That is why I had thought of using these two Kohonen networks (maybe a wrong idea), where the first one learns and then transfers to the second.

If there are any experts in this area around, I'd appreciate any comments.
 
  • #11
Perhaps a more correct way would be to use a Hopfield network as the hippocampus to perform the initial learning, with the neocortex implemented as an MLP. During a consolidation phase, random inputs could be fed to the Hopfield network, which would settle them into its learned attractor states; those input-output pairs could then be used to train the MLP (neocortex).
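If it helps, a rough sketch of that consolidation phase (illustrative only; the function names are my own, and scikit-learn's MLPRegressor stands in for the neocortical MLP): random probes are settled by the "hippocampal" Hopfield weights into attractor states, and the resulting (probe, attractor) pseudo-patterns are interleaved with previously generated ones when the MLP is retrained.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def hopfield_recall(W, probe, sweeps=20):
    # Settle a random probe into one of the attractors stored in the Hopfield weights W.
    s = probe.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

def consolidate(W, n_units, n_pseudo=200, old_pseudo=None):
    """Pseudo-rehearsal: generate (probe, attractor) pairs from the 'hippocampus'
    and train the 'neocortical' MLP on them, interleaved with earlier pseudo-patterns."""
    rng = np.random.default_rng(0)
    probes = rng.choice([-1.0, 1.0], size=(n_pseudo, n_units))
    targets = np.array([hopfield_recall(W, p) for p in probes])
    if old_pseudo is not None:
        # Interleaving old pseudo-patterns is the step intended to protect the
        # MLP's existing mappings, i.e. to reduce catastrophic interference.
        probes = np.vstack([probes, old_pseudo[0]])
        targets = np.vstack([targets, old_pseudo[1]])
    neocortex = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
    neocortex.fit(probes, targets)
    return neocortex, (probes, targets)
```

Calling consolidate(W, n_units) would return the retrained MLP together with the pseudo-pattern set to interleave at the next consolidation phase.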

I'm still trying to figure out how catastrophic interference is minimized in this scenario.

Pardon my poor knowledge/understanding in this area. Trying my best to make sense of the whole thing. Getting a little upset - must be my stupid amygdala at work. :frown:
 
  • #12
neurocomp2003 said:
If you use a Kohonen network, it'll be very sparse IMO. Again, try it out; if it doesn't work, then you will know. You might be interested to read up on a self-motivated researcher in the UK named Steve Grand (look up his book "Growing Up with Lucy").

Interesting biography. It's motivating to see an individual with so much drive and self-initiative delve into such a complex area of study. It's a pity no one is funding the Lucy project anymore.
 

1. What is memory consolidation?

Memory consolidation is the process by which memories are strengthened and stored in the brain. This process involves the formation of new neural connections and the reinforcement of existing ones, allowing for long-term storage of information.

2. How do neural networks play a role in memory consolidation?

Neural networks are computational models inspired by the way the brain processes information. They are used in memory consolidation research to model how the brain forms and strengthens neural connections to store memories.

3. What are the benefits of using neural networks for memory consolidation research?

Using neural networks allows for a more detailed and comprehensive understanding of the complex processes involved in memory consolidation. It also allows hypotheses to be tested in simulation in ways that may not be possible with traditional research methods.

4. What are some limitations of using neural networks in memory consolidation research?

One limitation is that neural networks are only simulations and may not fully capture the complexity of the human brain. Additionally, the data used to train the neural network may not accurately reflect real-life scenarios, leading to potential biases in the results.

5. How can the findings from modelling memory consolidation using neural networks be applied in real life?

The insights gained from these studies can potentially be applied in fields such as education, where understanding how memories are formed and stored can aid in developing more effective learning strategies. It can also have implications in treating memory-related disorders and improving memory retention in individuals.
