Alzheimer's disease in neural networks

In summary, the conversation discusses whether an effect similar to Alzheimer's disease could be observed in artificial neural networks, and what minimum complexity would be needed for it to occur. It also considers the potential of a joint neural-physical model for studying aging and finding brain patterns associated with a long, healthy life, and mentions "overtraining" in neural networks as a possible cause of such an effect.
  • #1
ShayanJ
After watching "Still Alice" (starring Julianne Moore), which is about an Alzheimer's patient, I started thinking about this disease a little bit, and it occurred to me that it's plausible to expect a similar effect in neural networks too. I mean, imagine you have a neural network that solves e.g. Schrödinger's equation. As time passes and it solves more equations, it should become better at it. But suddenly you see that its ability is gradually decreasing as it solves more equations, which should seem strange.
Does anyone know about an observation of such an effect in a neural network? Is there any theoretical prediction of such an effect? If yes, what's the minimum complexity needed for such an effect to happen?
Thanks
 
  • #2
Bystander
From Wiki: "'Neural' if it possesses the following characteristics:
  1. contains sets of adaptive weights, i.e. numerical parameters that are tuned by a learning algorithm, and
  2. capability of approximating non-linear functions of their inputs."
    Shyan said:
    minimum complexity
I wouldn't think there would need to be more than a single set of "adaptive" weights/parameters being corrupted to observe a machine analog of Alzheimer's or some other dementia, so long as those parameters are feeding back into the learning algorithm.

The conjugate to your question might be, "What is the minimum complexity for a network to repair/heal/maintain itself?"
 
  • #3
Fooality
Shyan said:
After watching "Still Alice" (starring Julianne Moore), which is about an Alzheimer's patient, I started thinking about this disease a little bit, and it occurred to me that it's plausible to expect a similar effect in neural networks too. I mean, imagine you have a neural network that solves e.g. Schrödinger's equation. As time passes and it solves more equations, it should become better at it. But suddenly you see that its ability is gradually decreasing as it solves more equations, which should seem strange.
Does anyone know about an observation of such an effect in a neural network? Is there any theoretical prediction of such an effect? If yes, what's the minimum complexity needed for such an effect to happen?
Thanks
Great question. They may have to add in some more features to actually model it though, check this out:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2927852/

There is increasing evidence that the nervous system may act as a central regulator of ageing by coordinating the physiology of extraneural tissues. In worms, a number of different mutations that disrupt the function of sensory neurons extend lifespan (90). Furthermore, ablation of specific neurons can increase lifespan in worms (91) and flies (67).

The impression that comes from that paper is that aging is actually a feedback loop: the brain ages the body, and the body ages the brain. So certain physical factors can induce mental aging, and certain neural factors can induce physical aging.

Of course, none of it is a show-stopper for what you're talking about; rather, it reveals the potential power of a fully developed joint neural-physical model: it could help us find brain patterns that make for a long, healthy life.
 
  • #4
FactChecker
This may be relevant. In neural networks there is such a thing as "overtraining" the network. That is when the network is no longer forming the generalities that you want it to and is starting to form too detailed a fit to the input data. When you train a NN with the right amount of training data, the fitted curve forms a generalization of the training data. When you use too much training data, that curve is forced exactly through the training data, "memorizing" details that you want it to ignore. That makes it less useful when applied to new data that is slightly different from the training data.
 
  • #5
ShayanJ
Bystander said:
From Wiki: "'Neural' if it possesses the following characteristics:
  1. contains sets of adaptive weights, i.e. numerical parameters that are tuned by a learning algorithm, and
  2. capability of approximating non-linear functions of their inputs."
I wouldn't think there would need to be more than a single set of "adaptive" weights/parameters being corrupted to observe a machine analog of Alzheimer's or some other dementia, so long as those parameters are feeding back into the learning algorithm.

The conjugate to your question might be, "What is the minimum complexity for a network to repair/heal/maintain itself?"

Given that nothing other than those parameters changes in NNs (at least as far as I know), then yes, it's plausible to think they somehow become corrupted and cause the effect. But given that NNs are designed to improve those parameters as they proceed, the question is how that corruption can happen. FactChecker suggested overtraining, which may give a clue.

And thanks for the conjugate question. It will be interesting to see whether those two minima differ.
Fooality said:
Great question. They may have to add in some more features to actually model it though, check this out:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2927852/

There is increasing evidence that the nervous system may act as a central regulator of ageing by coordinating the physiology of extraneural tissues. In worms, a number of different mutations that disrupt the function of sensory neurons extend lifespan (90). Furthermore, ablation of specific neurons can increase lifespan in worms (91) and flies (67).

The impression that comes from that paper is that aging is actually a feedback loop: the brain ages the body, and the body ages the brain. So certain physical factors can induce mental aging, and certain neural factors can induce physical aging.

Of course, none of it is a show-stopper for what you're talking about; rather, it reveals the potential power of a fully developed joint neural-physical model: it could help us find brain patterns that make for a long, healthy life.

That's a good point. We can't isolate our brains from our bodies, so even if there is a similar effect in neural networks, it will have essential differences from Alzheimer's disease due to the absence of a model body. Of course, researchers may do what you suggested, and that will be interesting, but I think it's still valuable to study the effect in NNs as they are now. (EDIT: if it still exists!)

FactChecker said:
This may be relevant. In neural networks there is such a thing as "overtraining" the network. That is when the network is no longer forming the generalities that you want it to and is starting to form too detailed a fit to the input data. When you train a NN with the right amount of training data, the fitted curve forms a generalization of the training data. When you use too much training data, that curve is forced exactly through the training data, "memorizing" details that you want it to ignore. That makes it less useful when applied to new data that is slightly different from the training data.

Yeah, that may help. But the training process stops at some point, and then the NN starts operating. What I meant was a decline starting at a time when the NN is already operating. I don't know whether this overtraining is relevant to that or not, because I don't know enough about NNs.
 
  • #6
Shyan said:
That's a good point. We can't isolate our brains from our bodies, so even if there is a similar effect in neural networks, it will have essential differences from Alzheimer's disease due to the absence of a model body. Of course, researchers may do what you suggested, and that will be interesting, but I think it's still valuable to study the effect in NNs as they are now. (EDIT: if it still exists!)

Yeah, I agree. The general effect shouldn't be too hard to simulate. If you had, for instance, a simple NN binary classifier, a really simple model might just bring some chaos to the learned weights to introduce uncertainty, based on a semi-educated guess about the mechanisms of the disease. You could look at what sorts of inputs survive these changes and, as you scale it up, learn about the kinds of inputs that last the longest as the disease progresses, possibly to bring better quality of life and communication methods to people in the early stages of dementia.
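A minimal sketch of that idea (everything here is hypothetical illustration, not a model of the actual disease): train a tiny logistic-regression "network" on toy data, then corrupt its learned weights with noise and watch classification accuracy decay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable data: the class is just the sign of the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

# Train a minimal logistic-regression "network" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.5 * np.mean(p - y)                # gradient step on the bias

def accuracy(w, b):
    return np.mean(((X @ w + b) > 0) == (y == 1))

healthy = accuracy(w, b)

# "Disease": corrupt the learned weights with noise and measure the decline,
# averaged over many independent corruptions.
corrupted = np.mean([accuracy(w + rng.normal(scale=5.0, size=w.shape), b)
                     for _ in range(50)])

print(f"healthy accuracy:   {healthy:.2f}")
print(f"corrupted accuracy: {corrupted:.2f}")
```

Ramping the noise scale up gradually over time, rather than applying a single large value, would give the slow decline described in the thread.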
 
  • #7
Shyan said:
After watching "Still Alice" (starring Julianne Moore), which is about an Alzheimer's patient, I started thinking about this disease a little bit, and it occurred to me that it's plausible to expect a similar effect in neural networks too. I mean, imagine you have a neural network that solves e.g. Schrödinger's equation. As time passes and it solves more equations, it should become better at it. But suddenly you see that its ability is gradually decreasing as it solves more equations, which should seem strange.
Does anyone know about an observation of such an effect in a neural network? Is there any theoretical prediction of such an effect? If yes, what's the minimum complexity needed for such an effect to happen?
Thanks

In Alzheimer's, the biochemistry of the network is changed, so overtraining is not the right theoretical concept.

At the theoretical level, Alzheimer's is not so different from what would happen if you suddenly removed a critical number of units from your artificial neural network. To take a simple example, in a Hopfield net the memory capacity is monotonic in the number of units. Say you have trained a Hopfield net. If, after training, you remove some critical number of units, the network will no longer be able to recall some of its memories.
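This is easy to demonstrate concretely. Here is a minimal sketch (all numbers chosen arbitrarily) of a Hopfield net trained with the Hebbian rule, whose recall of a stored pattern degrades after a fraction of its units are lesioned:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100   # units
P = 5     # stored patterns, well under the ~0.14*N capacity limit

patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian learning rule; no self-connections.
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(W, probe, steps=10):
    """Iterate synchronous sign updates from a probe state."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

def overlap(a, b):
    return np.mean(a == b)

# Probe with a noisy version of pattern 0 (10 of 100 bits flipped).
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1

healthy = overlap(recall(W, probe), patterns[0])

# "Lesion": silence all connections of 60 of the 100 units.
W_lesioned = W.copy()
dead = rng.choice(N, size=60, replace=False)
W_lesioned[dead, :] = 0.0
W_lesioned[:, dead] = 0.0

lesioned = overlap(recall(W_lesioned, probe), patterns[0])
print(f"healthy recall overlap:  {healthy:.2f}")
print(f"lesioned recall overlap: {lesioned:.2f}")
```

With the full network, the noisy probe converges back to the stored pattern; after the lesion, the dead units carry no signal and the surviving subnetwork is pushed toward its capacity limit, so the overlap with the stored memory drops.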
 
  • #8
I have seen cases where, if you aren't careful about data selection, a neural net that retrains or learns continuously on the response characteristics of a system it is controlling in closed loop can forget the answer it had originally learned by making and observing mistakes, because it doesn't make those mistakes anymore! All it sees is its own optimal answer, which it eventually memorizes. Once overtrained this way, if disturbed, it can't find its way back to the solution it had discovered and has to start by making mistakes all over again. This is a pretty big issue. One trick I know is to force it to always remember the original exploration (always train partly on special old mistake-rich data). Another trick is to confuse it a little bit regularly, to test its understanding of alternative control vectors. But you have to disturb the closed-loop system to do that.

Although interesting, I don't think this is anything like Alzheimer's, which, as I understand it, is a biological disease that erodes an already-formed network.
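The "keep training on the old mistake-rich data" trick described above is essentially a replay buffer. A minimal sketch of why it helps, using ordinary least squares on made-up data in place of a real closed-loop controller: a model refit only on narrow near-optimal operating data loses the relationship learned during early exploration, while a model that keeps replaying the old data retains it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Early exploration data: a wide range of states, idealized as noise-free.
X_old = rng.normal(size=200)
y_old = 2.0 * X_old - 1.0

# Later operating data: clustered tightly around the optimal operating point,
# so it carries almost no information about the slope.
X_new = 3.0 + 0.01 * rng.normal(size=200)
y_new = 2.0 * X_new - 1.0 + rng.normal(size=200)

def fit(x, y):
    """Ordinary least squares for y = w*x + b; returns [w, b]."""
    A = np.column_stack([x, np.ones(len(x))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def mse(coef, x, y):
    return np.mean((coef[0] * x + coef[1] - y) ** 2)

# Refit only on the narrow operating data: the model "memorizes" that regime.
new_only = fit(X_new, y_new)

# Replay trick: keep the old mistake-rich data in every training set.
replay = fit(np.concatenate([X_old, X_new]), np.concatenate([y_old, y_new]))

err_new_only = mse(new_only, X_old, y_old)
err_replay = mse(replay, X_old, y_old)
print(f"error on old regime without replay: {err_new_only:.3f}")
print(f"error on old regime with replay:    {err_replay:.3f}")
```

The operating-point cluster alone cannot pin down the slope, so the refit model drifts badly off the exploration regime; mixing the old data back in anchors it.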
 
  • #9
FactChecker said:
When you use too much training data, that curve is being forced exactly through the training data and "memorizing" details that you want it to ignore. That is less useful when applied to new data that is slightly different from the training data.

That's not quite right. I believe what you're talking about is called overfitting. It occurs when you have a neural network that is complicated enough to fit complicated functions and too small a training set. The network then fits the small training set very closely but won't generalize. To fix this, you increase the size of the training set or simplify the neural network.

An example of overfitting is shown in the image below. The higher-degree polynomial is better able to exactly match the training data, but the lower-degree polynomial is obviously better and more general.
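The same comparison can be run numerically (a small sketch with arbitrary numbers): fit a degree-1 and a degree-9 polynomial to ten noisy samples of a straight line, then compare their errors on clean held-out points.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten noisy training samples from an underlying straight line.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + 1.0 + rng.normal(scale=0.2, size=10)

# Clean held-out points from the same line, between the training points.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2.0 * x_test + 1.0

def generalization_error(degree):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

err_low = generalization_error(1)   # matches the true model's form
err_high = generalization_error(9)  # enough freedom to interpolate the noise

print(f"degree-1 test error: {err_low:.4f}")
print(f"degree-9 test error: {err_high:.4f}")
```

The degree-9 polynomial passes essentially exactly through all ten training points, so its training error is near zero, yet it oscillates between them and its held-out error is far worse than the simple line's.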
 

Attachments

  • img37.png (overfitting comparison plot)
  • #10
I imagine any dynamic network would erode. Neural networks learn in really weird ways sometimes, and they aren't always predictable.

IBM's Watson had to have part of its linguistics database purged because it got into Urban Dictionary and started to learn to swear.


Google's Deep Dreaming AI has seen a lot of pictures of dogs, so it sees dogs everywhere


There are weird things that humans just know that neural networks have to figure out, and they find it difficult if they aren't provided with examples. For example, researchers at Google asked their AI to produce a picture of a dumbbell, and it drew a dumbbell with a human arm attached. Every time it had seen a dumbbell in a picture, there was an arm with it, so Google's AI assumed the two should always be associated.
 
  • #11
Jimster41 said:
Too funny. I wouldn't be too surprised if once we have AI's of real order they are going to be pretty good at a) cracking us up, b) pi#@ing us off c) not much else. I am interested more in systems that can do what a reasonably smart, especially compliant, frog might be able to do 24/7 - but for the cost of .02 FTE's.
Some scientists have expressed great concern that inventing true AI will be existentially more dangerous than the invention of the nuclear bomb. Nuclear bombs can't launch themselves or modify their own behavior.
 
  • #12
Overview for non-biologists:
https://www.alz.org/braintour/3_main_parts.asp
There are 17 panels to view.

IMO, if you think about it, an "Alzheimer's-brained CPU and RAM" would have dead and dying DDR SDRAM components, short circuits, defective cores, etc. The southbridge would have partially failing I/O connections because the northbridge was short-circuited in places. I understand that neural networking is largely software, but some of the posts seem to imply a little more. Alzheimer's is completely in the physical realm; manifestations and our human viewpoint transform it. I do not see it as software at all.

Basically, I do not believe the von Neumann architecture maps very well onto the human brain either. So the analogies, while interesting, need lots of work.

For example:
Due to neural plasticity, our brains can "reroute" neuronal traffic and learn to use new parts of the brain to make up for a function that was diminished by damage to tissue elsewhere. Where is there a neural network or OS with the ability to realize that physical components are damaged and some required function bit the dust, and then relearn it, without the use of the required function? The neural network would also have to figure out, a priori, that it must route communications around the bad parts to relearn that diminished ability. I do not know of anything like that.

File systems and OS memory management can mark failed components (bad disk sectors, DDR SDRAM components) and stop using them. About the closest thing to what we are discussing, recovery after damage, might be RAID disks. But we programmatically specify everything about "healing" when we construct a RAID: we program the controller to find and mark bad sectors, to limp along on functioning ones, and to send an SOS to sysadmins before so much damage is incurred that data rescue becomes impossible and a restore operation is required.
 

