Modelling Memory Consolidation using Neural Networks


Discussion Overview

The discussion revolves around modeling memory consolidation using neural networks, specifically exploring the roles of different types of networks such as Hopfield and Kohonen networks in simulating the functions of the hippocampus and neocortex. Participants are examining theoretical frameworks and potential implementations related to memory transfer and learning processes during sleep.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant suggests using a Hopfield network to represent the hippocampus and another Hopfield network for the neocortex, proposing that the hippocampus acts as a teacher.
  • Another participant proposes that the hippocampus should be modeled as a Kohonen network due to its unsupervised learning capabilities, which would then teach a Hopfield network representing the neocortex.
  • Questions arise regarding how information should be passed between the two networks, with one participant inquiring about the use of pseudorehearsals and random inputs.
  • Concerns are raised about the effectiveness of a Kohonen network due to its sparsity, with suggestions to explore other models or approaches.
  • Discussion includes the concept of catastrophic interference and the idea of dual-network memory models as a potential solution to this issue.
  • One participant contemplates the transfer process from the hippocampus to the neocortex and references the concept of pseudo-patterns for training the second network.
  • Another participant suggests using a Hopfield network for initial learning and an MLP for the neocortex, questioning how to minimize catastrophic interference in this setup.

Areas of Agreement / Disagreement

Participants express differing views on the appropriate neural network models to use for simulating memory consolidation, with no consensus reached on the best approach. Various models are proposed, and questions remain about the mechanisms of information transfer and the implications of different network architectures.

Contextual Notes

Participants mention limitations related to understanding the transfer of information between networks and the challenges posed by catastrophic interference. There is also uncertainty regarding the effectiveness of the proposed models and the specific implementation details.

Who May Find This Useful

This discussion may be of interest to researchers and students in neuroscience, artificial intelligence, and cognitive science, particularly those exploring neural network applications in modeling memory processes.

gadgets
Hi guys,

If there's anyone out there who has knowledge in this area, I'm seeking to find out how to model memory consolidation using neural networks.

I was thinking of using a Hopfield network to train another Hopfield network. The first network would represent the hippocampus, and the second network would represent the neocortex. I thought this was appropriate since the hippocampus actually acts as a "teacher" to the neocortex.
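The two-network idea could be sketched with a minimal Hopfield network in NumPy (a toy illustration; the class name, network size, and corruption level are arbitrary choices, not anything established in this thread):

```python
import numpy as np

class Hopfield:
    """Minimal Hopfield network: Hebbian storage, asynchronous recall."""

    def __init__(self, n):
        self.n = n
        self.W = np.zeros((n, n))

    def store(self, patterns):
        # Hebbian outer-product rule; patterns are +/-1 vectors.
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)

    def recall(self, x, sweeps=5):
        # Asynchronous updates: sweep the units in random order.
        x = x.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(self.n):
                x[i] = 1 if self.W[i] @ x >= 0 else -1
        return x

# Store one pattern and recover it from a corrupted cue.
rng = np.random.default_rng(0)
p = rng.choice([-1, 1], size=32)
net = Hopfield(32)
net.store([p])
cue = p.copy()
cue[:5] *= -1          # flip a few bits
restored = net.recall(cue)
print(np.array_equal(restored, p))  # prints True
```

The same class could be instantiated twice, with the "hippocampal" copy generating training patterns for the "neocortical" one.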

I'm wondering if my thinking is correct?

Any advice would be greatly appreciated.
 
After some reading, I think the hippocampus should be implemented as a Kohonen network, since it performs off-line (unsupervised) learning. This network would then act as a teacher to a Hopfield network (the neocortex).
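For reference, a Kohonen (self-organising) map learns by unsupervised competitive learning: the best-matching unit and its neighbours are pulled toward each input. A minimal 1-D sketch (all sizes, schedules, and cluster positions below are arbitrary illustrations):

```python
import numpy as np

def train_som(data, n_units=10, epochs=50, lr=0.5, sigma=2.0):
    # 1-D Kohonen map: for each input, find the best-matching unit (BMU)
    # and move it and its neighbours toward the input.
    rng = np.random.default_rng(1)
    weights = rng.random((n_units, data.shape[1]))
    for epoch in range(epochs):
        a = lr * (1 - epoch / epochs)            # decaying learning rate
        s = sigma * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-dist ** 2 / (2 * s ** 2))  # neighbourhood kernel
            weights += a * h[:, None] * (x - weights)
    return weights

# Two well-separated 2-D clusters: after training, some units should sit
# near each cluster centre.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.0, 0.05, (50, 2)),
                  rng.normal(1.0, 0.05, (50, 2))])
w = train_som(data)
```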

Can any experts advise me on this matter?
 
Nobody knows?
 
There are many ways you can try it, and the outline you listed is worth a try. There are many theories out there... and just as many ways to code them.

Search online for the researcher Sue Becker.
 
Thanks.

Do you know how information should be passed to the other network?

I was thinking, the first Kohonen network (Hippocampus) can be trained with a training set to get the desired weights. How can I use this network to train the 2nd Kohonen network (Neocortex)?

I read about pseudorehearsals but don't quite understand the concept. Does it mean that I should just use random inputs at the Hippocampus or something like that?
 
gadgets said:
I was thinking, the first Kohonen network (Hippocampus) can be trained with a training set to get the desired weights. How can I use this network to train the 2nd Kohonen network (Neocortex)?
What are you trying to achieve with the second Kohonen network? :confused:
 
MeJennifer said:
What are you trying to achieve with the second Kohonen network? :confused:

The 2nd Kohonen network is akin to the Neocortex, where all the long term memory is stored. I'm trying to model the concept of consolidation, whereby the Hippocampus learns and transfers the memory to the Neocortex during REM/NREM sleep.

Let me know if I'm going wrong somewhere.

Thanks.
 
If you use a Kohonen network... it'll be very sparse, IMO.
Again, try it out; if it doesn't work, then you will know.
You might be interested to read up on a self-motivated researcher
in the UK named Steve Grand (look up his book "Growing Up with Lucy").
 
gadgets said:
The 2nd Kohonen network is akin to the Neocortex, where all the long term memory is stored. I'm trying to model the concept of consolidation, whereby the Hippocampus learns and transfers the memory to the Neocortex during REM/NREM sleep.

Let me know if I'm going wrong somewhere.

Thanks.
Well, perhaps I'm missing something.
Once the first Kohonen network "clusters" the significant statistical coincidences of the input neurons, what could the second one possibly improve on?
 
MeJennifer said:
Well, perhaps I'm missing something.
Once the first Kohonen network "clusters" the significant statistical coincidences of the input neurons, what could the second one possibly improve on?

Actually, I'm still in the dark, trying to figure out which is the right way to implement it.

I read a few papers and came to know about catastrophic interference, which is the reason why some researchers have proposed dual-network memory models to resolve this problem.

Then I came to wonder how the first network (hippocampus) could possibly transfer its contents to, or "teach", the second network (neocortex). I read that Robins proposed the idea of pseudo-patterns. From what I understand, this means creating random inputs to feed to the artificial hippocampus; these pseudo-patterns could then be used to train the neocortex.
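Pseudo-pattern generation in that style could be sketched like this (a toy illustration, assuming a Hopfield net stands in for the hippocampus; sizes and counts are arbitrary): probe the trained network with random inputs, let it settle, and keep the (probe, attractor) pairs as a training set for the neocortical network.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32

# "Hippocampal" Hopfield net storing a few +/-1 patterns via the Hebbian rule.
memories = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in memories).astype(float)
np.fill_diagonal(W, 0)

def settle(x, max_sweeps=10):
    # Asynchronous updates until the state stops changing (a fixed point).
    x = x.copy()
    for _ in range(max_sweeps):
        prev = x.copy()
        for i in np.random.permutation(N):
            x[i] = 1 if W[i] @ x >= 0 else -1
        if np.array_equal(x, prev):
            break
    return x

# Pseudorehearsal: random probes settle into attractors (often stored
# memories, sometimes spurious blends); the (probe, attractor) pairs
# become the training set for the neocortical network.
pseudo_patterns = [(probe, settle(probe))
                   for probe in rng.choice([-1, 1], size=(20, N))]
```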

Thus, I had thought of using these two Kohonen networks (maybe a wrong idea), whereby the first one learns and then transfers to the second.

If there are any experts in this area around, I'd appreciate any comments.
 
Perhaps a more correct way would be to use a Hopfield Network as the hippocampus to perform the initial learning. The neocortex could be implemented as an MLP. Thus, during a consolidation phase, random inputs could be fed to the Hopfield to obtain the trained outputs. These values could then facilitate the training of the MLP (neocortex).
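The interference question can be illustrated with a toy comparison (assuming a small autoassociative tanh MLP trained by plain gradient descent; all sizes, learning rates, and epoch counts are arbitrary): training sequentially on new items alone tends to overwrite old ones, while interleaving rehearsed old items with the new ones tends to preserve them.

```python
import numpy as np

rng = np.random.default_rng(3)
N, H = 16, 32   # pattern size, hidden units

def mlp_init():
    return [rng.normal(0, 0.1, (N, H)), rng.normal(0, 0.1, (H, N))]

def forward(params, X):
    W1, W2 = params
    h = np.tanh(X @ W1)
    return np.tanh(h @ W2), h

def train(params, X, Y, epochs=500, lr=0.05):
    # Plain gradient descent on mean-squared error (autoassociation: Y == X).
    W1, W2 = params
    for _ in range(epochs):
        out, h = forward([W1, W2], X)
        d_out = (out - Y) * (1 - out ** 2)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out / len(X)
        W1 -= lr * X.T @ d_h / len(X)
    return [W1, W2]

def recall_err(params, X):
    # Fraction of bits whose sign is recalled incorrectly.
    return np.mean(np.sign(forward(params, X)[0]) != X)

old = rng.choice([-1.0, 1.0], size=(5, N))   # already-consolidated items
new = rng.choice([-1.0, 1.0], size=(5, N))   # fresh "hippocampal" items

# Sequential: train on old, then on new alone (interference expected) ...
p_seq = train(train(mlp_init(), old, old), new, new)
# ... versus interleaving the new items with rehearsed old ones.
p_mix = train(train(mlp_init(), old, old), np.vstack([old, new]),
              np.vstack([old, new]))

# Interleaving should leave recall of the old items no worse than
# sequential training does.
```

In a full dual-network model, the rehearsed "old" items would themselves be pseudo-patterns sampled from the hippocampal network rather than the stored originals.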

I'm still trying to figure out how catastrophic interference is minimized in this scenario.

Pardon my poor knowledge/understanding in this area. Trying my best to make sense of the whole thing. Getting a little upset - must be my stupid amygdala at work. :frown:
 
neurocomp2003 said:
If you use a Kohonen network... it'll be very sparse, IMO.
Again, try it out; if it doesn't work, then you will know.
You might be interested to read up on a self-motivated researcher
in the UK named Steve Grand (look up his book "Growing Up with Lucy").

Interesting biography. It's motivating to see an individual with so much drive and self-initiative to delve into such a complex area of study. It's a pity no one is funding the Lucy project anymore.
 
