Is the brain analog, digital, both or

  1. Nov 22, 2011 #1
    Is it way too simplistic to think of it in this context?
     
  3. Nov 22, 2011 #2

    atyy

    Science Advisor

    One can sometimes see this language in describing the effect that a spike in one neuron has on another. It is a pretty good approximation to say that each isolated spike always has the same effect. However, this may not be a sufficiently good description in all circumstances: http://www.ncbi.nlm.nih.gov/pubmed/22031895

    Also, it has long been known that sequences of spikes do not necessarily have effects that are linear sums of isolated spikes. http://www.ncbi.nlm.nih.gov/pubmed/19145233
     
  4. Nov 22, 2011 #3

    apeiron

    Gold Member

    No, it is a valid starting point for framing the questions. Note that people would also say much the same thing contrasting dynamic and computational, as well.

    The brain is not digital in any pure sense, but it does exploit digital "power" in important ways, such as the all-or-nothingness of an action potential spike, or the definite point-to-point connections made by axon fibres.

    So don't try to force the brain into an either/or bracket - that it has to be basically analog, or basically digital. Instead, the analog~digital dichotomy is a way of framing the extremes of what might be the case. It is a conceptual guide that is "too simple", but also useful in getting started.
     
  5. Nov 22, 2011 #4

    Pythagorean

    Gold Member

    I approach neuroscience from the nonlinear dynamics perspective which has the intrinsic bias that things be analog.

    And definitely, if you model the neural system via Hodgkin-Huxley, it is most certainly analog.
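To see concretely what "analog" means here, below is a minimal sketch of the standard squid-axon Hodgkin-Huxley equations stepped with forward Euler (the parameters are the textbook 1952 values; the drive current and durations are made up for illustration). Every state variable is a continuous quantity, even though the output spikes look stereotyped.

```python
import math

# Textbook squid-axon Hodgkin-Huxley parameters (1952 values).
C = 1.0                                  # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3        # peak conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=100.0, dt=0.01):
    """Forward-Euler integration; every state variable is continuous."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
# Count upward zero-crossings as "spikes".
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
```

With a sustained 10 uA/cm^2 drive the model fires tonically, yet nowhere in the equations is there a discrete state: the all-or-nothing spikes are features of a continuous flow.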
     
  6. Nov 22, 2011 #5

    Evo


    Staff: Mentor

    Waves are analog...
     
  7. Nov 22, 2011 #6

    Evo


    Staff: Mentor

    Please only respond to the topic in the OP.
     
  8. Nov 22, 2011 #7

    Evo


    Staff: Mentor

    To expand on this hopefully without going overboard.

    http://sulcus.berkeley.edu/wjf/AG_MakingSenseOfEEG.pdf
     
  9. Nov 23, 2011 #8

    apeiron

    Gold Member

    Yep, Freeman was pretty much the sharpest thinker I met on this particular issue, and a big influence on me. A philosopher-scientist of the old school.
     
  10. Nov 23, 2011 #9
    With regards to Freeman, his paper "How the brain makes chaos to make sense of the world"

    http://sulcus.berkeley.edu/freemanwww/manuscripts/IC8/87.html

    describes how he used a system of non-linear delay-differential equations to model the olfactory system of a rabbit. That paper references an earlier paper

    "Simulation of Chaotic EEG Patterns with a Dynamic Model of the Olfactory System"

    which actually goes over the equations but I cannot find an accessible reference on the net for it.
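Freeman's actual olfactory equations aren't reproduced here, but the flavor of a delay-differential model can be sketched with the classic Mackey-Glass equation (a standard DDE example, not Freeman's model), stepped with Euler and a history buffer:

```python
def mackey_glass(beta=0.2, gamma=0.1, n=10, tau=17.0, dt=0.1, t_max=400.0):
    """Euler integration of a delay-differential equation:

        dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)

    The delayed term means the state is a whole history segment rather
    than a single number, which is what makes a DDE infinite-dimensional
    and lets even a one-variable system behave chaotically."""
    lag = int(tau / dt)               # number of steps spanned by the delay
    history = [1.2] * (lag + 1)       # constant initial history
    out = []
    for _ in range(int(t_max / dt)):
        x, x_tau = history[-1], history[-1 - lag]
        x_new = x + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x)
        history.append(x_new)
        out.append(x_new)
    return out

xs = mackey_glass()
```

With these canonical parameters the trajectory wanders irregularly but stays bounded, which is the qualitative picture Freeman argued for: orderly chaos, not noise.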

    A quote from the first paper:

    With regards to the brain being analog or digital, I believe that depends on what part of the brain you're referring to. Aren't spike trains somewhat digital? And whether a neuron fires, and at what frequency, depends on a variable threshold applied to the summed input of the other neurons impinging on it, so is that not analog? And the concepts of mind and consciousness I believe are neither, but rather dynamic and emergent.
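That mix of analog summation and all-or-nothing firing is exactly what the textbook leaky integrate-and-fire model caricatures (a generic illustration with unitless toy values, not any particular paper's model):

```python
def lif(i_ext=1.5, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire: tau * dV/dt = -V + i_ext (unitless toy).

    V sums its input continuously (analog); crossing threshold produces
    a stereotyped event and a reset (all-or-nothing, i.e. 'digital')."""
    v, spike_times, trace = 0.0, [], []
    for step in range(int(t_max / dt)):
        v += dt * (-v + i_ext) / tau
        if v >= v_th:
            spike_times.append(step * dt)  # only the event time is "transmitted"
            v = v_reset
        trace.append(v)
    return spike_times, trace

spike_times, trace = lif()
```

The membrane trace never actually contains a "1": the analog voltage stays below threshold, and what comes out the other side is purely a train of event times.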
     
    Last edited: Nov 23, 2011
  11. Nov 23, 2011 #10

    apeiron

    Gold Member

    Yes, the brain seems to be both things. And for a long time, like most people, I believed it had to be one of these things basically.

    So it could be that the brain is digital/computational, and all the messy dynamism was simply biological noise - background stuff to be filtered out or coarse-grained away.

    Or it could be that the brain is essentially dynamical/analog, and so needs to be described in the language of attractors and feedback.

    But then there was obviously a third ontological choice that the brain might be "something else" that we hadn't yet been able to articulate, but of which we had a dual description in terms of computationalist and dynamicist modelling.

    So we had invented two kinds of maths, both of which could be bent to describe what we were seeing. But the thing itself was still some "hybrid" that deserved its own further proper mathematical basis.

    Within neuroscience, I found you had dynamicists like Freeman, Kelso and Friston who were doing a good job of creating dual or hybrid descriptions - explaining how fundamentally dynamic material systems could give rise to emergently digital properties.

    So this was making the case (as Pythagorean says) that the base reality was "analog" (dynamical is a better word really) and that the computational features were emergent - but also then did actual work. They were not epiphenomenal but functional or causal. Spikes needed to be spikes for things to happen.

    So stuff like Kelso's work on co-ordination dynamics, and Friston's work on neural transients, was promising, but also seemed to lack something essential. They did justice to the dynamical biological basis but not, it seemed to me, the formal computational aspect - the other half of what was going on.

    It was that which led me to theoretical biologists and hierarchy theorists like Stan Salthe, Howard Pattee and Robert Rosen in particular, who were tackling the issue of the role of information and symbol in the organisation of living systems in the most general possible manner.

    Genes look digital, and they control the analog world of cellular metabolism. There, the distinction between the two realms looks quite sharp, and so easier to understand. With neurons, spikes, synapses, dendrites, axons, it is all much more entangled and messy. So it seemed a good strategy to see how the problem of "analog or digital" gets tackled in theoretical biology so as to apply those lessons to the more difficult, intertwined, case of theoretical neuroscience.

    That led to learning about current work in semiotics, epistemic cuts, modelling relations, MR systems, infodynamics, etc. This is an emerging discipline in itself. Founded you could say on the framing question - analog or digital?

    So you have a series of questions in fact. Is the brain analog or digital? Is the body analog or digital? Is reality itself analog or digital?

    Or as I believe the case, is it always a hybrid, a compound? And so we need to work towards a dualised description (in the manner of Salthe's infodynamics for example) to do justice to the fact that complexity is a product of both material and formal cause.

    The state-of-the-art approach in neuroscience currently is, I feel, Friston's Bayesian brain and free energy principle.

    http://www.fil.ion.ucl.ac.uk/~karl/The free-energy principle A unified brain theory.pdf

    This takes advantage of the fact that thermodynamics is already a hybrid description of reality - dynamics described in terms of information. And also smuggles in finality. A system can have a goal.
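As a toy illustration of the "Bayesian brain" idea (a scalar conjugate-Gaussian update, nothing like Friston's full free-energy machinery): the estimate moves by a precision-weighted fraction of the prediction error.

```python
def bayes_update(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: the posterior mean is a precision-weighted
    blend of the prediction (prior) and the sensory sample (obs). The shift
    is proportional to the prediction error (obs - prior_mu)."""
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    gain = post_var * obs_prec          # how far the error moves the estimate
    post_mu = prior_mu + gain * (obs - prior_mu)
    return post_mu, post_var

# A confident prior barely moves; a vague prior is dominated by the data.
mu1, _ = bayes_update(prior_mu=0.0, prior_var=0.1, obs=1.0, obs_var=1.0)
mu2, _ = bayes_update(prior_mu=0.0, prior_var=10.0, obs=1.0, obs_var=1.0)
```

The point of the hybrid framing: the quantities are all continuous dynamics, but the update rule is an inference step, i.e. dynamics described in terms of information.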

    So Freeman was probably the best example of what a hybrid approach to brain modelling looked like in his day, and Friston would be the best around today. IMO.

    [edit: Friston is a good person to track on this issue because he went from "strong dynamicism" under the influence of Kelso, Freeman, Tononi and others - as evidenced in his early transient papers: http://www.google.co.nz/url?sa=t&rc...yPHWDg&usg=AFQjCNEVEqoGs92MuITgN6oHX7OSPIHKKA - to his current more hybrid approach through close contact with the Gatsby Computational Neuroscience Unit that Geoff Hinton set up next door (http://www.gatsby.ucl.ac.uk/).

    So you literally had conscious hybridisation there :smile:. People set things up so the two camps would be working together to find a unified brain theory.]
     
    Last edited: Nov 23, 2011
  12. Nov 23, 2011 #11

    Pythagorean

    Gold Member

    And this is really no different than the electrical engineering version of digital. We harness a signal through a component (transistors are particularly good at this) and out the other side comes either a high signal or a low signal.

    In reality, these signals are actually analog. Whether low or high, the values fluctuate around their nominal level (generally 0.5 V for low and 5 V for high when I did the labs myself). We then design components around the transistor that only respond to a large enough voltage (around 5 V) and ignore voltages down around 0.5 V.
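That thresholding-with-margin idea is essentially a Schmitt trigger. A toy sketch (the voltage levels are the assumed TTL-ish values from the post, the noisy ramp is made up): a single threshold chatters when the analog signal hovers near it, while hysteresis produces one clean digital transition.

```python
import random

random.seed(0)  # deterministic noise for reproducibility

# A slow analog ramp from 0 V to 5 V with +/-0.4 V of noise riding on it.
samples = [5.0 * i / 999 + random.uniform(-0.4, 0.4) for i in range(1000)]

def naive_digitise(xs, threshold=2.5):
    """Single threshold: chatters whenever the signal hovers near 2.5 V."""
    return [1 if x > threshold else 0 for x in xs]

def schmitt_digitise(xs, lo=1.8, hi=3.2):
    """Hysteresis: only flips high above 3.2 V and low below 1.8 V, so
    noise smaller than the gap cannot toggle the output."""
    out, state = [], 0
    for x in xs:
        if state == 0 and x > hi:
            state = 1
        elif state == 1 and x < lo:
            state = 0
        out.append(state)
    return out

def transitions(bits):
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

naive = transitions(naive_digitise(samples))
clean = transitions(schmitt_digitise(samples))
```

The "digital" output only exists because the circuit is engineered to ignore the analog fine structure; break the margin (the dying component below) and the analog substrate shows through again.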

    Of course, components can "die hard", at which point they may half-work with a 2.5 V input and you get funky behavior (computer power supplies are notorious for "dying hard" and can sometimes be difficult to troubleshoot, because the failure causes other problems in the system that point the finger at components that merely rely on the power source).

    This is what I think is important from the medical perspective of neuroscience. If we treat brain components as only digital, we are ignoring medical cases in which the "sorter" that apeiron talks about (that which allows analog to be interpreted digitally) is "broken" and now can't go "all or nothing".
     
  13. Nov 23, 2011 #12

    atyy

    Science Advisor

    Well, of course analogue systems can do computation, and dynamical systems can be discrete. And optimality is related in the deterministic case to the classical inverse problem of the calculus of variations (Tonti, I think, has the modern view). In the probabilistic case, I'm not so sure what the general statement is, but variational approaches to inference algorithms (an algorithm is a dynamical system, or perhaps more properly, a control system) are widely used.
     
  14. Nov 23, 2011 #13

    apeiron

    Gold Member

    Right. The material basis of computational devices is fundamentally dynamical. But then there is still the software that runs on top of the hardware. And the software does not care about this analog basis to its execution.

    So this is the tension in the modelling of brains as devices. Is the undeniable underlying bio-dynamicism a bug or a feature?

    Computationalists say it would be a bug, dynamicists like Freeman said it was a feature - essential to how the neural circuits actually work. And once you start thinking either/or, you go round in circles.

    Instead, you have to have some hybrid concept that sees hardware and software as part of a common system. Like self-organising neural networks, for instance.

    The software has to be making its own hardware in some sense. It now cares about the analog basis to its own existence.
     
  15. Nov 23, 2011 #14
    You guys and gals are way over my head, and I am very thankful for that! It tells me we are in very good hands.

    Would it be OK for me to think that the digital computer/robot will never be better than our brains because we are able to perceive all things in real time (as in, time is an analog that goes wrrrrrrrrrrrrrrrrrrrrrrrrrrrrr....... and time is not a digital tick, tock, tick, tock..... right?). In real time, the robot will miss everything that happens between the rising (or falling) edges of its clock, the kind of stuff we don't miss. Yes, I say this partly "tongue in cheek". gPa
     
  16. Nov 23, 2011 #15

    atyy

    Science Advisor

    Basically, at the level of accuracy we are interested in, there is no fundamental difference between continuous and discrete. One famous example is the compact disc. Sound is of course analogue. However we only hear up to 20 kHz. The Nyquist sampling theorem guarantees that all the analogue information below 20 kHz can be captured if we sample discretely at 40 kHz. The basic idea is we don't need infinite accuracy. The accuracy we need is not always describable as a frequency cut-off, so the Nyquist theorem doesn't always apply. Nonetheless, the general idea holds.
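A sketch of the sampling theorem in action (a made-up 3 Hz tone sampled at 40 Hz, well above its 6 Hz Nyquist rate, then reconstructed between the samples by Whittaker-Shannon sinc interpolation; the finite sum is only checked mid-window, where truncation error is small):

```python
import math

F_SIG = 3.0          # tone frequency, Hz (band limit well below FS / 2)
FS = 40.0            # sampling rate, Hz
T = 1.0 / FS

def x(t):
    return math.sin(2.0 * math.pi * F_SIG * t)

samples = [x(n * T) for n in range(200)]   # 5 seconds of discrete samples

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t):
    """Whittaker-Shannon interpolation: the analog value at any instant t
    is recovered from the discrete samples alone."""
    return sum(s * sinc((t - n * T) / T) for n, s in enumerate(samples))

# Compare at off-sample instants near the middle of the window.
pts = [2.5 + k * 0.003 for k in range(10)]
errs = [abs(reconstruct(t) - x(t)) for t in pts]
```

Nothing "between the samples" is lost: for a band-limited signal the discrete values determine the continuous waveform everywhere.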

    Also, your brain does miss things. For example, you can't hear soft sounds that immediately follow a loud sound. So analogue is not necessarily more accurate. Your inability to hear some sounds in a sequence is used in MP3 compression - if you can't hear it, there's no need for the digital system to encode it.
     
    Last edited: Nov 23, 2011
  17. Nov 23, 2011 #16

    apeiron

    Gold Member

    This is an example of how deeply misleading the computational view of the brain can be - why it is probably essential to start from the biological or dynamical basis.

    Brains don't see anything in real time. It takes about a third of a second for all the nerve traffic to get everywhere it has to for us to see what has just happened. But on the other hand, we already anticipated it would happen (as much as we could) ahead of time. (See the Friston paper, this is what he is modelling).

    So computers work by turning input into output. Brains work by anticipating their input.
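A toy contrast between the two strategies (nothing like Friston's actual model; a made-up smooth stimulus, with a linear extrapolator standing in for "anticipation"): a system that just relays its delayed input lags badly, while one that extrapolates forward by its known lag tracks "now" far better.

```python
import math

DT = 0.01                     # 10 ms time step
F = 0.2                       # a slow, smooth 0.2 Hz "stimulus"
signal = [math.sin(2.0 * math.pi * F * n * DT) for n in range(500)]

LAG = 30                      # a 300 ms processing delay

# Strategy 1: pure input -> output. Report what arrived LAG steps ago.
delayed = [signal[n - LAG] for n in range(LAG, len(signal))]

# Strategy 2: anticipate. Extrapolate the stale input forward by the
# known lag, so the report aims at "now" rather than the past.
predicted = [signal[n - LAG] + LAG * (signal[n - LAG] - signal[n - LAG - 1])
             for n in range(LAG + 1, len(signal))]

def mean_abs_err(estimates, offset):
    errs = [abs(e - signal[offset + i]) for i, e in enumerate(estimates)]
    return sum(errs) / len(errs)

err_delayed = mean_abs_err(delayed, LAG)
err_predicted = mean_abs_err(predicted, LAG + 1)
```

The anticipator is wrong whenever the stimulus does something unexpected, which is exactly the prediction-error signal the Bayesian-brain story runs on.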

    It is the same as I said about imagining the kind of software that can make its own hardware.

    In some ways, brains/biology are so much like computers/machines. And then they are also completely different. :smile:
     
  18. Nov 23, 2011 #17

    Pythagorean

    Gold Member

    How do you measure "better"? Computers are much, much faster and more efficient at procedural serial operations. Which is why we use them now. Computational physics, computational chemistry, computational neuroscience. They have sped up theoretical discovery and exploration a lot because they can do serial procedural tasks quite quickly. So a numerical solution that would take you years to work out by hand can be done in a couple of hours by a computer. Consequently, the data can be turned into visualizations much faster, and small changes can be made without redoing the whole thing from the start (more procedural task speed).

    A human brain can interpret context by mixing several inputs and internal assumptions together at once and producing several simultaneous outputs: parallel processing. This leads to many advantages over the computer in prediction, but also allows for more error (predictions are more flippant, assumptions are numerous and not many of them are recorded for later scrutiny, results can be biased by emotional drives, etc.).
     
  19. Nov 23, 2011 #18
    From a programming point of view, my computer will DO EXACTLY what I tell it to do, it will NOT necessarily do what I wanted it to do!

    I know the mistake was my fault, but I am only human. If I give the same instructions to a person they will in all likelihood perceive my intention and not do the absurd. To me, that is better. :)
     
  20. Nov 23, 2011 #19

    Pythagorean

    Gold Member

    I think it's hard to know whether a computer/robot will ever be better at that kind of context sensing. Humans can design some clever things. There's definitely funding going into developing such types of systems.

    Note that undergraduates (in academia) and greenhorns (on the fishing boat) often do the same kind of thing (interpret instructions too literally) when they are first learning things and following such procedural instructions.
     