
Alzheimer's disease in neural networks

  1. Aug 2, 2015 #1

    ShayanJ

    Gold Member

    After watching "Still Alice" (starring Julianne Moore), which is about an Alzheimer's patient, I started thinking about this disease a little bit, and suddenly it occurred to me that it's plausible to expect such an effect in neural networks too. I mean, imagine you have a neural network that solves e.g. Schrödinger's equation. As time passes and it solves more equations, it should become better at it. But suddenly you see that its ability is gradually decreasing as it solves more equations, which should seem strange.
    Does anyone know about an observation of such an effect in a neural network? Is there any theoretical prediction of such an effect? If yes, what's the minimum complexity needed for such an effect to happen?
    Thanks
     
  3. Aug 2, 2015 #2

    Bystander

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    From Wiki: "'Neural' if it possesses the following characteristics:
    1. contains sets of adaptive weights, i.e. numerical parameters that are tuned by a learning algorithm, and
    2. capability of approximating non-linear functions of their inputs."
    I wouldn't think there would have to be more than a single set of "adaptive" weights/parameters being corrupted to observe a machine analog of Alzheimer's or some other dementia, so long as those parameters are feeding back to the learning algorithm.
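    A toy sketch of that point (my assumptions: plain numpy, a small two-layer net trained on XOR, and "corruption" modeled as zeroing a random fraction of the tuned weights):

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)

    # Train a two-layer sigmoid net on XOR by plain gradient descent.
    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(5000):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d2 = (out - y) * out * (1 - out)
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
        W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

    # "Dementia": silently kill a growing fraction of the tuned weights.
    for frac in (0.0, 0.1, 0.3, 0.5):
        Wc = W1.copy()
        Wc[rng.random(Wc.shape) < frac] = 0.0   # dead connections
        out = sig(sig(X @ Wc + b1) @ W2 + b2)
        print(f"corrupted {frac:.0%}: mean error = {np.abs(out - y).mean():.3f}")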

    The conjugate to your question might be, "What is the minimum complexity for a network to repair/heal/maintain itself?"
     
  4. Aug 3, 2015 #3
    Great question. They may have to add some more features to actually model it, though; check this out:
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2927852/

    There is increasing evidence that the nervous system may act as a central regulator of ageing by coordinating the physiology of extraneural tissues. In worms, a number of different mutations that disrupt the function of sensory neurons extend lifespan (90). Furthermore, ablation of specific neurons can increase lifespan in worms (91) and flies (67).

    The impression that comes from that paper is that aging is actually a feedback loop: the brain ages the body, and the body ages the brain. So certain physical factors can induce mental aging, and certain neural factors can induce physical aging.

    Of course, none of it is a show-stopper for what you're talking about; rather, it reveals a potential payoff once it's fully modeled as a joint neural-physical model: it could help us find brain patterns that make for a long, healthy life.
     
  5. Aug 3, 2015 #4

    FactChecker

    Science Advisor
    Gold Member

    This may be relevant. In neural networks there is such a thing as "overtraining" the network. That is when the network is no longer forming the generalities you want it to and is instead starting to fit the input data in too much detail. When you train a NN the right amount, the fitted curve forms a generalization of the training data. When you train too much, the curve is forced exactly through the training data, "memorizing" details that you want it to ignore. That makes it less useful when applied to new data that is slightly different from the training data.
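    A rough sketch of the idea in code (my assumptions: a deliberately over-parameterized degree-15 polynomial model fitted by gradient descent to noisy samples of sin(3x); the held-out error typically bottoms out and then creeps back up as training continues, which is the "overtrained" regime):

    Code:
    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n):                 # noisy samples of a smooth target
        x = rng.uniform(-1, 1, n)
        return x, np.sin(3 * x) + rng.normal(0, 0.3, n)

    def features(x, d=15):            # polynomial features, degree 15
        return np.vander(x, d + 1, increasing=True)

    xtr, ytr = make_data(20)
    xva, yva = make_data(200)
    A, Av = features(xtr), features(xva)
    w = np.zeros(A.shape[1])

    for step in range(1, 20001):
        w -= 0.01 * A.T @ (A @ w - ytr) / len(ytr)   # gradient step
        if step in (100, 1000, 5000, 20000):
            tr = np.mean((A @ w - ytr) ** 2)
            va = np.mean((Av @ w - yva) ** 2)
            # Training error falls monotonically; validation error
            # often turns around once the noise gets memorized.
            print(f"step {step:6d}: train MSE {tr:.3f}, val MSE {va:.3f}")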
     
  6. Aug 3, 2015 #5

    ShayanJ

    Gold Member

    Given that nothing other than those parameters changes in NNs (at least as far as I know), then yes, it's plausible to think they somehow become corrupted and cause the effect. But given that NNs are designed to improve those parameters as they proceed, the question is how that corruption can happen. FactChecker suggested overtraining, which may give a clue.

    And thanks for the conjugate question. It will be interesting if those two minima differ.


    That's a good point. We can't isolate our brains from our bodies, so even if there is a similar effect in neural networks, it will have essential differences from Alzheimer's disease due to the absence of a model body. Of course researchers may do what you suggested, and that will be interesting, but I think it's still valuable to study the effect in NNs as they are now. (EDIT: if it even exists!)

    Yeah, that may help. But the training process stops at some point and the NN starts operating. What I intended was a decline starting at a time when the NN is already operating. I don't know whether this overtraining is relevant to that or not, because I don't know enough about NNs.
     
    Last edited: Aug 3, 2015
  7. Aug 4, 2015 #6
    Yeah, I agree. The general effect shouldn't be too hard to simulate. If you had, for instance, a simple NN binary classifier, a really simple model might just inject some chaos into the learned weights, based on a semi-educated guess about the mechanisms of the disease. You could look at what sorts of inputs survive these changes and, as you scale it up, learn which kinds of inputs last the longest as the disease progresses, possibly leading to better quality of life and communication methods for people in the early stages of dementia. Something like the sketch below.
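    A minimal sketch (hypothetical setup: a logistic-regression "classifier" on two Gaussian blobs, with the disease modeled as additive noise on the learned weights; the point is that inputs far from the decision boundary survive the corruption longest):

    Code:
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(+1, 1, (n, 2))])
    y = np.r_[np.zeros(n), np.ones(n)]

    # Fit logistic regression by gradient descent.
    w = np.zeros(2); b = 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.1 * X.T @ (p - y) / len(y)
        b -= 0.1 * np.mean(p - y)

    margin = np.abs(X @ w + b)            # how "easy" each input is
    for sigma in (0.0, 0.5, 1.0, 2.0):    # progressive "disease"
        ok = np.zeros(len(y))
        for _ in range(100):              # average over noise draws
            wn = w + rng.normal(0, sigma, 2)
            bn = b + rng.normal(0, sigma)
            ok += ((X @ wn + bn > 0) == (y == 1))
        ok /= 100
        hi, lo = margin > np.median(margin), margin <= np.median(margin)
        print(f"sigma={sigma}: survival {ok.mean():.2f} "
              f"(easy inputs {ok[hi].mean():.2f}, hard {ok[lo].mean():.2f})")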
     
  8. Aug 5, 2015 #7

    atyy

    Science Advisor

    In Alzheimer's, the biochemistry of the network is changed, so overtraining is not the right theoretical concept.

    At the theoretical level, Alzheimer's is not so different from suddenly removing a critical number of units from your artificial neural network. To take a simple example, in a Hopfield net, the memory capacity is monotonic in the number of units. Let's say you have trained a Hopfield net. If, after training, you simply remove some critical number of units, the network will no longer be able to recall some of its memories.
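    A minimal sketch of that experiment (my assumptions: the standard binary Hopfield net with Hebbian weights and synchronous updates, with lesioning modeled by zeroing the rows and columns of the removed units):

    Code:
    import numpy as np

    rng = np.random.default_rng(3)
    N, P = 100, 8                                  # units, stored patterns
    patterns = rng.choice([-1, 1], (P, N))

    W = sum(np.outer(p, p) for p in patterns) / N  # Hebbian weights
    np.fill_diagonal(W, 0)

    def recall(W, probe, steps=20):
        s = probe.copy()
        for _ in range(steps):                     # synchronous updates
            s = np.sign(W @ s); s[s == 0] = 1
        return s

    for kill in (0, 20, 40, 60):                   # units removed
        dead = rng.choice(N, kill, replace=False)
        Wd = W.copy(); Wd[dead, :] = 0; Wd[:, dead] = 0
        alive = np.setdiff1d(np.arange(N), dead)
        good = 0
        for p in patterns:
            probe = p.copy(); probe[dead] = 1      # lesioned units silenced
            r = recall(Wd, probe)
            good += np.mean(r[alive] == p[alive]) > 0.95
        print(f"{kill} units removed: {good}/{P} memories recalled")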
     
  9. Aug 5, 2015 #8
    I have seen cases where, if you aren't careful about data selection, a neural net that retrains or learns continuously on the response characteristics of a system it is controlling in closed loop can forget the answer it originally learned by making and observing mistakes, because it doesn't make those mistakes anymore. All it sees is its own optimal answer, which it eventually memorizes. Once overtrained this way, if disturbed, it can't find its way back to the solution it had discovered and has to start making mistakes all over again. This is a pretty big issue. One trick I know is to force it to always remember that original exploration (always train partly on special, mistake-rich old data). Another trick is to confuse it a little bit regularly, to test its understanding of alternative control vectors. But you have to disturb the closed-loop system to do that.
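    The first trick, as a minimal sketch (everything here is hypothetical, with a linear model standing in for the real controller; the point is just that every retraining batch mixes replayed, mistake-rich exploration data with the fresh closed-loop data):

    Code:
    import numpy as np

    rng = np.random.default_rng(4)

    class RehearsalTrainer:
        def __init__(self, w, exploration_X, exploration_y):
            self.w = w
            # Frozen copy of the original, mistake-rich exploration data.
            self.old_X, self.old_y = exploration_X, exploration_y

        def update(self, new_X, new_y, lr=0.05, mix=0.5):
            # Each batch = fresh closed-loop data + replayed old data,
            # so the net never sees only its own optimal answers.
            k = int(mix * len(new_X))
            idx = rng.choice(len(self.old_X), k)
            X = np.vstack([new_X, self.old_X[idx]])
            y = np.r_[new_y, self.old_y[idx]]
            err = X @ self.w - y              # linear model for brevity
            self.w -= lr * X.T @ err / len(y)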

    Although interesting, I don't think this is anything like Alzheimer's, which, as I understand it, is a biological disease that erodes an already-formed network.
     
    Last edited: Aug 5, 2015
  10. Aug 5, 2015 #9
    That's not quite right. I believe what you're talking about is called overfitting. It occurs when you have a neural network that is complicated enough to fit complicated functions, and you have too small a training set. The network then fits the small training set very closely but won't generalize. To solve this, you increase the size of the training set or simplify the neural network.

    An example of overfitting is shown in the image below. The higher-degree polynomial is better able to exactly match the training data, but the lower-degree polynomial is obviously better and more general.
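    The same picture in a few lines of numpy (my assumptions: 10 noisy samples of a linear target; degree 9 interpolates the training points almost exactly but does much worse on held-out data than degree 1):

    Code:
    import numpy as np

    rng = np.random.default_rng(5)
    xtr = np.linspace(0, 1, 10)
    ytr = 2 * xtr + rng.normal(0, 0.3, 10)     # noisy linear training data
    xte = np.linspace(0, 1, 200)
    yte = 2 * xte                              # noise-free truth

    for deg in (1, 9):
        coef = np.polyfit(xtr, ytr, deg)
        tr = np.mean((np.polyval(coef, xtr) - ytr) ** 2)
        te = np.mean((np.polyval(coef, xte) - yte) ** 2)
        print(f"degree {deg}: train MSE {tr:.3f}, test MSE {te:.3f}")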
     

    [Attached image: the two polynomial fits described above]

  11. Aug 5, 2015 #10
    I imagine any dynamic network would erode. Neural networks learn in really weird ways sometimes, and they aren't always predictable.

    IBM's Watson had to have part of its linguistics database purged because it got into Urban Dictionary and started learning to swear.

    Google's Deep Dreaming AI has seen a lot of pictures of dogs, so it sees dogs everywhere.

    There are weird things that humans just know but that neural networks have to figure out, and they find it difficult without examples. For example, researchers at Google asked their AI to produce a picture of a dumbbell. The AI drew a dumbbell with a human arm attached to it: every time it had seen a dumbbell in a picture, there was an arm with it, so Google's AI assumed the two should always be associated.
     
  12. Aug 5, 2015 #11
    Some scientists have expressed great concern that inventing true AI will be existentially more dangerous than the invention of the nuclear bomb. Nuclear bombs can't launch themselves or modify their own behavior.
     
  13. Aug 6, 2015 #12

    jim mcnamara


    Staff: Mentor

    Overview for non-biologists:
    https://www.alz.org/braintour/3_main_parts.asp
    There are 17 panels to view.

    IMO, if you think about it, an "Alzheimer's-brained CPU and RAM" would have dead and dying DDR SDRAM components, short circuits, defective cores, etc. The southbridge would have partially failing I/O connections because the northbridge was short-circuited in places. I understand that neural networking is largely software, but some of the posts seem to imply a little more. Alzheimer's is completely in the physical realm; manifestations and our human viewpoint transform it. I do not see it as software at all.

    Basically, I do not believe the von Neumann architecture maps very well onto the human brain, either. So the analogies, while interesting, need lots of work.

    For example:
    Due to neural plasticity, our brains can "reroute" neuronal traffic and learn to use new parts of the brain to make up for a function that was diminished by damage to tissue elsewhere. Where is there a neural network or OS with the ability to realize that physical components are damaged and some required function bit the dust, and then relearn that function, without the use of the function itself? Such a network would have to figure out, a priori, that it must route communications around the bad parts in order to relearn the diminished ability. I do not know of anything like that.

    File systems and OS memory management can mark failed components (sectors of a disk, DDR SDRAM modules) and stop using them. About the closest thing to what we are discussing, recovery after damage, might be RAIDed disks. But we programmatically specify everything about "healing" when we construct a RAID: we program the controller to find and mark bad disk sectors and limp along on the functioning ones, and to send an SOS to sysadmins before so much damage is incurred that data rescue becomes impossible and a full restore is required.
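    A toy sketch of that "mark it and limp along" policy (a hypothetical block device in Python; real controllers do this in firmware, and every name here is made up for illustration):

    Code:
    class Disk:
        def __init__(self, sectors):
            self.data = [b""] * sectors
            self.bad = set()                     # marked-bad sector map

        def write(self, sector, payload):
            if sector in self.bad:
                raise IOError(f"sector {sector} marked bad, remap needed")
            self.data[sector] = payload

        def mark_bad(self, sector):
            self.bad.add(sector)                 # stop using it; no repair
            if len(self.bad) > 0.1 * len(self.data):
                print("SOS to sysadmin: too many bad sectors, restore soon")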
     
    Last edited: Aug 6, 2015



