# Equation from neuroscience textbook, regarding modeling neurons

Hello,

I am trying to understand an equation from the textbook "Theory of Neural Information Processing Systems" by Coolen, Kühn and Sollich.
The book states that "the evolution in time of the postsynaptic potential V(t) can be written as a linear differential equation of the form:
$$\frac {d}{dt}V(t) = \frac {d}{dt}V(t) |_{passive} + \frac {d}{dt}V(t) |_{reset}$$"
The book then addresses the passive component:
"The first term represents the passive cable-like electric behaviour of the dendrite:
$$\tau \frac{d}{dt}V(t) |_{passive} = \frac{1}{\rho} \left[ \tilde{I} + \sum_{k = 1}^{N}I_{k}(t) - \rho V(t) \right]$$
In which the two parameters $\tau$ and $\rho$ reflect the electrical properties of the dendrite ($\tau$ being the characteristic time for current changes to affect the voltage, $\rho$ controlling the stationary ratio between voltage and current), and $\tilde{I}$ represents the stationary currents due to the ion pumps."

The summation over the $I_k(t)$ represents the contributions of the currents at the synapses with the sending neurons.

I'm already confused. I am not sure what these parameters actually are. If $\tilde{I}$ "represents the stationary currents due to the ion pumps", then I take this to mean that the ion pumps work continuously at a set pace, never changing the rate at which they move ions across the membrane, regardless of what else is happening; a stationary current is time-independent. In that case $\tilde{I}$ is a constant with units of current. $\tau$ and $\rho$ are equally confusing; I think $\tau$ has units of time, and $\rho$ has units of current over voltage. Take the case, given in the textbook, where there are no contributions from other neurons; then $I_k(t) = 0$, and:

$$\tau \frac{d}{dt}V(t) |_{passive} = \frac{\tilde{I}}{\rho} - V(t)$$

The textbook claims that $\frac{\tilde{I}}{\rho} = V_{rest}$. The resting membrane potential is, essentially, a constant. If both $\tilde{I}$ and $V_{rest}$ are constants, then $\rho$ is also a constant. This is fine, since $\rho$ controls "the stationary ratio between voltage and current". What we then have is:

$$\frac{d}{dt}V(t) = \frac{V_{rest} - V(t)}{\tau}$$

This equation makes no sense to me; I don't see what it is trying to show. The units work out fine at every step, but I am convinced that my reading of the parameters is completely wrong. Take $\tilde{I}$: if it represents a time-independent current, and it doesn't appear to be a function of anything else, then it is a constant. But for a membrane at rest the net current has to be zero, while $V_{rest}$ is generally not zero, so $\tilde{I}$ cannot be zero; it is not, then, simply the sum of all the ion pumps 'doing their thing' when the membrane is at rest. The same goes for the other parameters: I do not see where they come from or what they actually represent. I think $\tau$ is connected to capacitance, but I am grasping at straws.

Sorry that this is a little unstructured. I wanted to include my "attempt" to piece together what is happening. Basically, if anyone can give any guidance on the reasoning behind this equation, that would be much appreciated.

atyy
$$\frac{d}{dt}V(t) = \frac{V_{rest} - V(t)}{\tau}$$

This equation makes no sense to me; I don't see what it is trying to show.

If we start the membrane potential ##V## away from ##V_{rest}##, then ##V## decays towards ##V_{rest}##.
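To see this decay numerically, here is a toy sketch (my own, not from the book; the parameter values are arbitrary illustrative choices): forward-Euler integration of ##\frac{d}{dt}V = (V_{rest} - V)/\tau##, compared against the exact solution ##V(t) = V_{rest} + (V_0 - V_{rest})e^{-t/\tau}##.

```python
import math

# Toy illustration (not from the book): integrate dV/dt = (V_rest - V)/tau
# with forward Euler and compare with the exact solution
# V(t) = V_rest + (V0 - V_rest) * exp(-t/tau).
V_rest = -70.0   # mV, assumed resting potential
tau = 10.0       # ms, assumed membrane time constant
V0 = -55.0       # mV, start away from rest
dt = 0.01        # ms

V = V0
for step in range(int(100.0 / dt)):   # simulate 100 ms (= 10 tau)
    V += dt * (V_rest - V) / tau

exact = V_rest + (V0 - V_rest) * math.exp(-100.0 / tau)
# After 10 time constants, V has relaxed essentially to V_rest.
```

However V is initialized, the perturbation away from ##V_{rest}## shrinks by a factor of e every ##\tau## milliseconds; that is all the equation is saying.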

atyy
The units work out fine at all steps, but I am convinced that what I think the parameters are doing is completely wrong. $\tilde{I}$ for example, if it represents a time-independent current, and it doesn't appear to be a function of something else, it is then a constant, but for a membrane at rest, the net current has to be zero; Vrest is generally not zero, and so $\tilde{I}$ cannot be zero, it is not then the summation of all the ions pumps 'doing their thing' when the membrane is at rest.

Yes, the interpretation of the terms seems a bit strange to me. I think it's fine to just treat ##\tilde{I}/\rho## as a single parameter ##V_{rest}##.

Pythagorean
Gold Member
I think $\tilde{I}$ would also include the leak currents (basically, even when channels are closed they have some small permeability, plus whatever makes it through the membrane itself). The overall equation is basically an equilibrium-seeking equation, qualitatively describing the neuron's tendency to return to its equilibrium potential based on its charge balance (ion concentration and charge inside vs. ion concentration and charge outside).

atyy
A simple form in which only the leak conductances are present, and in which the terms have a physical interpretation, is:

##C_{M} \frac{d}{dt}V(t) = -G_{L} (V(t) - E_{L})##

##V(t)## membrane potential
##C_{M}## membrane capacitance
##G_{L}## membrane conductance with only leak conductances present
##E_{L}## reversal potential of the leak conductance

Roughly, the current ##\tilde{I}## in the OP corresponds to the term ##G_{L}E_{L}##, which does have units of current. The OP's question then becomes: what is the physical mechanism that establishes ##E_{L}##? As Pythagorean mentions in post #4, over time scales of tens of minutes, or in situations in which the internal and external ion concentrations are approximately constant, the leak conductance can be thought of as a passive permeability of the membrane to certain ions, so ##E_{L}## will be determined by something like the Nernst equation. Over even longer time scales, or to answer how the concentration gradient is established in the first place, there are mechanisms like the sodium-potassium ATPase, which I don't think simple equations of the type in the OP are considering.
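As a quick numerical aside (my own sketch; the concentrations are typical textbook values for K+ in mammalian neurons, not figures from this thread): the Nernst potential for a single ion species is ##E = \frac{RT}{zF}\ln\frac{[\text{out}]}{[\text{in}]}##.

```python
import math

# Hedged sketch: Nernst potential for a single ion species,
# E = (R*T)/(z*F) * ln([out]/[in]).  Concentrations below are
# typical textbook values for K+ (assumed, not from the thread).
R = 8.314       # J/(mol K), gas constant
T = 310.0       # K (about 37 C)
F = 96485.0     # C/mol, Faraday constant
z = 1           # valence of K+

K_out, K_in = 5.0, 140.0   # mM, extracellular and intracellular K+
E_K = (R * T) / (z * F) * math.log(K_out / K_in)   # in volts
# E_K comes out to about -0.089 V (-89 mV), in the right ballpark
# for the resting potentials these equations describe.
```

A potassium-only Nernst potential is more negative than typical resting potentials, which is part of why a multi-ion treatment (see the next post) is usually used for the leak reversal potential.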

Pythagorean
Gold Member
It would likely be the Goldman-Hodgkin-Katz equation (to determine the leak reversal potential) rather than the Nernst equation, since several ions are involved in the leak current.
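For concreteness, here is a sketch of the Goldman-Hodgkin-Katz voltage equation; the relative permeabilities and concentrations below are illustrative squid-axon-style textbook values, not numbers from this thread.

```python
import math

# Sketch of the Goldman-Hodgkin-Katz voltage equation for K+, Na+, Cl-:
# Vm = (RT/F) * ln( (pK*[K]o + pNa*[Na]o + pCl*[Cl]i)
#                 / (pK*[K]i + pNa*[Na]i + pCl*[Cl]o) )
# Note the chloride terms are flipped (inside on top) because Cl- is an anion.
R, T, F = 8.314, 310.0, 96485.0
pK, pNa, pCl = 1.0, 0.04, 0.45   # assumed relative permeabilities at rest
K_o, K_i = 5.0, 140.0            # mM
Na_o, Na_i = 145.0, 15.0
Cl_o, Cl_i = 110.0, 10.0

Vm = (R * T / F) * math.log(
    (pK * K_o + pNa * Na_o + pCl * Cl_i)
    / (pK * K_i + pNa * Na_i + pCl * Cl_o)
)
# Vm comes out around -0.067 V (-67 mV): less negative than the K+
# Nernst potential alone, because of the Na+ and Cl- contributions.
```
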

DiracPool
It would likely be the Goldman-Hodgkin-Katz equation (to determine the leak reversal potential) rather than the Nernst equation, since several ions are involved in the leak current.

These single-neuron Hodgkin-Huxley-derived equations are nice for the museum of neuroscience, but IMHO they have little practical relevance in the discussion of brain function. In fact, I think they pose more of a red herring than anything else. The relevant dynamics in brain function are carried out at the mesoscopic population level, which has similarities to the "microscopic" individual-neuron level BUT also has important differences. Specifically, the wave-to-pulse conversion in the population follows a non-linear sigmoid curve, whereas in the individual neuron the action is much more linear, with a "shutoff" at a certain voltage:

http://sulcus.berkeley.edu/freemanwww/manuscripts/id6/92.html

So in my opinion, it's a waste of time to study single-neuron dynamics. I think we realized this with the Aplysia hype in the 90's, which was sold as the route to understanding how memory works, with all that BS about long-term potentiation. That is not how the mammalian brain's memory systems work.
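To make the contrast above concrete, here is a toy sketch (the functional forms and parameters are illustrative choices, not taken from Freeman's papers or any published model): a logistic sigmoid standing in for the population wave-to-pulse curve, versus a thresholded, capped linear curve standing in for the single unit.

```python
import math

# Toy contrast (parameters are illustrative assumptions):
# a population-level wave-to-pulse curve modeled as a logistic sigmoid,
# versus a single-unit curve that is roughly linear above a threshold.
def population_rate(v, q_max=100.0, gain=0.5, v_half=0.0):
    """Sigmoidal pulse density as a function of wave amplitude v."""
    return q_max / (1.0 + math.exp(-gain * (v - v_half)))

def single_unit_rate(v, threshold=0.0, slope=10.0, cutoff=100.0):
    """Piecewise-linear rate: silent below threshold, linear above, capped."""
    return min(max(slope * (v - threshold), 0.0), cutoff)

# The sigmoid saturates smoothly at both ends; the single-unit curve
# has a hard threshold ("shutoff") and a hard cap.
rates = [(v, population_rate(v), single_unit_rate(v)) for v in (-10, 0, 10)]
```
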

atyy
DiracPool's post #7 is his rather idiosyncratic personal opinion.

DiracPool
DiracPool's post #7 is his rather idiosyncratic personal opinion.

That is correct, but that doesn't mean I'm not 100% right, though.

Since I now feel I have to defend my idiosyncratic personal opinion (and am happy to do so), here's a little more info supporting that opinion...

Exhibit A--The Human Brain Project (HBP). This is a \$1.3 billion (yes, billion) project funded by the European Union based on the hype of one guy, Henry Markram, who in my opinion has no idea what he's doing scientifically, although he certainly is a great pitchman; he puts Ron Popeil to shame. Let's just say that the HBP is going nowhere, fast.

https://en.wikipedia.org/wiki/Human_Brain_Project

Remember this guy Markram? This is the same guy that was able to finagle IBM out of their Blue Gene supercomputer back in 2005.

https://en.wikipedia.org/wiki/Blue_Brain_Project

This effort similarly went nowhere. Why? Because modeling brain function at the level of the single neuron is not the most effective way to model cortical dynamics. This is an example of naive "atomistic" thinking that has set back the European Union and Switzerland roughly "two large," and when I say large, I mean large as in billion.

In fact, there's a big write up of the catastrophe that is the HBP published in the current issue of Scientific American (October 2015):

http://www.scientificamerican.com/article/why-the-human-brain-project-went-wrong-and-how-to-fix-it/

Here are some relevant quotes from the article:

"In 2005 he [Markram] founded the Blue Brain Project, to which IBM contributed a Blue Gene supercomputer, at the Swiss Federal Institute of Technology in Lausanne. The project uses data and software to simulate a small subset of a rat's brain, focusing on a collection of neurons known as a cortical column. But while the venture is generating knowledge about how to mathematically model some parts of the brain's circuitry, critics say the simulation can do very little that is useful or helps us understand how the brain actually works. To this day, Markram has not published a comprehensive paper of Blue Brain's findings in a peer-reviewed journal. Yet he quickly drafted plans to scale up the effort into an even more ambitious endeavor: building a supercomputer simulation of the entire human brain."

Does this sound like a good use of taxpayer money? What's the problem here? The problem is modelling cortical function at the single-neuron level. All he's been able to do is simulate a single cortical column of a rat (or maybe a handful of columns). This tells us nothing about the function of the column that we don't already know from studying the electrophysiology of "actual" cortical columns in vivo.

Another quote from the article:

"Even if it were possible, mainstream neuroscientists say, reengineering the brain at the level of detail envisioned by Markram would tell us nothing about cognition, memory or emotion—just as copying the hardware in a computer, atom by atom, would tell us little about the complex software running on it."

The moral of the story is that, as I said in my previous post, the level at which to model brain function is the population, not the single neuron. In the model I've been working with, the KIV model (soon to be the KV model), we model roughly on the order of 10,000 cortical columns with a single "node" represented as a single non-linear ODE. We couple together several dozen or several hundred of these ODEs and run the simulation using a Runge-Kutta solver. The results we get very closely match actual native EEG readings from live animals.

http://www.ncbi.nlm.nih.gov/pubmed/19395236
http://www.ncbi.nlm.nih.gov/pubmed/15011280
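The general recipe (couple nonlinear population nodes, integrate with Runge-Kutta) can be sketched as follows. This is emphatically NOT the actual KIV equations, which are not given in this thread; it is a generic toy network of damped second-order units with sigmoid coupling, and every parameter below is an arbitrary assumption.

```python
import math

# Toy stand-in for population-level modeling (NOT the KIV model): each
# node is a damped second-order unit driven by a sigmoid of the other
# nodes' states, integrated with classical 4th-order Runge-Kutta.
N = 4
W = [[0.0 if i == j else 0.3 for j in range(N)] for i in range(N)]  # coupling

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def deriv(state):
    # state = [x_0..x_{N-1}, v_0..v_{N-1}];  x'' = -2*g*x' - w0^2*x + input
    g, w0 = 0.2, 1.0
    x, v = state[:N], state[N:]
    dx = v[:]
    dv = [-2*g*v[i] - w0*w0*x[i] + sum(W[i][j]*sigmoid(x[j]) for j in range(N))
          for i in range(N)]
    return dx + dv

def rk4_step(state, dt):
    # Classical RK4: four derivative evaluations per step.
    k1 = deriv(state)
    k2 = deriv([s + 0.5*dt*k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5*dt*k for s, k in zip(state, k2)])
    k4 = deriv([s + dt*k for s, k in zip(state, k3)])
    return [s + dt/6.0*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state = [0.1*i for i in range(2*N)]   # arbitrary initial condition
for _ in range(1000):                 # integrate 10 time units at dt = 0.01
    state = rk4_step(state, 0.01)
```

With damping present and bounded sigmoid input, the toy network settles rather than blowing up; richer attractor dynamics of the kind described in the linked papers would need the actual model equations.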

atyy
@DiracPool in post #10: The part I don't understand about your defence is: let's suppose the KV model is useful. Does that mean it is not an interesting question how the KV model can be derived from the connectivity of single neurons? When you reply, keep in mind that this does not necessarily involve a brute-force simulation of the brain in excruciating detail. There are many methods of simplification. For example, if one starts with Hodgkin-Huxley-type equations, a valid approximation to them in a wide regime is some sort of integrate-and-fire neuron, or something like an Izhikevich or Brette-Gerstner sort of model, or even binary neurons. From there, macroscopic equations for some aspects of a cortical column can be derived, e.g. the work of van Vreeswijk and Sompolinsky or Nicolas Brunel.

From the point of view of physics, an analogy is the relationship between the microscopic BCS model for superconductivity and the macroscopic Landau-Ginzburg equations. What you seem to be saying is that only the Landau-Ginzburg equations are interesting, while the BCS model, a microscopic model from which the Landau-Ginzburg equations can be derived, is not.

DiracPool
From there, macroscopic equations for some aspects of a cortical column can be derived, e.g. the work of van Vreeswijk and Sompolinsky or Nicolas Brunel.

Slow down, atyy, you're dropping a lot of names on me here with that post and I don't have time right now to look all those up. Lol.

The part I don't understand about your defence is: let's suppose the KV model is useful. Does that mean that it is not an interesting question how the KV model can be derived from the connectivity of single neurons?

Interesting, maybe; practical, no. I think my post #10 goes some way toward explaining why. That said, I certainly don't think the study of single neurons should be eschewed. Students should definitely learn it; the single neuron is the "atom" of brain science. I'm just saying that it is important that, once you've studied it, you realize it's not going to tell you much about how the brain works. You need to move on. When I was an undergrad we spent a great deal of time studying the individual neuron: receptor chemistry, the kinetics of neurotransmitter vesicle release, second messenger systems related to neuromodulators, etc., etc. I'm glad I learned it, but it really didn't play a significant role in my later modelling of cortical functions. So my "take home point" is mostly a cautionary tale that (at least in neuroscience) you can waste a lot of time and money focusing on and modelling a system at the "wrong", or at least non-optimal, scale.

a valid approximation to them in a wide regime is some sort of integrate-and-fire neuron or something

Yeah, that reminds me. I really didn't reach "full disclosure" of my idiosyncratic personal opinion with my previous posts. What I left out was the large number of articles I had to referee over the years that modeled the individual neuron as some sort of Boolean operator or logic gate switch. This is an extreme version of neuroscientific illiteracy and one that's very annoying to me.

atyy
Yeah, that reminds me. I really didn't reach "full disclosure" of my idiosyncratic personal opinion with my previous posts. What I left out was the large number of articles I had to referee over the years that modeled the individual neuron as some sort of Boolean operator or logic gate switch. This is an extreme version of neuroscientific illiteracy and one that's very annoying to me.

Let's be concrete so that we can understand what you mean. Do you disapprove of the analysis using binary neurons in https://web.stanford.edu/group/brainsinsilicon/documents/vanVreeswijk_Sompolinsky_1996.pdf or http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2861483/?

DiracPool
Let's be concrete so that we can understand what you mean. Do you disapprove of the analysis using binary neurons

I'm not sure what you are talking about, atyy. A cursory glance at the articles you posted does not seem to imply that they are modeling the individual neuron as a Boolean logic gate. The articles I refereed that I said were "annoying" to me specifically characterized individual neurons as logic gates, under the larger theme, of course, of the brain-as-digital-computer analogy/metaphor. So I'm not entirely clear how the articles you posted apply to the discussion here. Just trying to be concrete.

atyy
I'm not sure what you are talking about, atyy. A cursory glance at the articles you posted does not seem to imply that they are modeling the individual neuron as a Boolean logic gate. The articles I refereed that I said were "annoying" to me specifically characterized individual neurons as logic gates, under the larger theme, of course, of the brain-as-digital-computer analogy/metaphor. So I'm not entirely clear how the articles you posted apply to the discussion here. Just trying to be concrete.

The binary neuron outputs 1 above threshold and 0 below threshold, so it is a perceptron, which, configured correctly, gives a NAND gate, e.g. http://neuralnetworksanddeeplearning.com/chap1.html.

Anyway, does that mean you don't object to binary neurons?
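Here is that NAND configuration spelled out, following the example in the neuralnetworksanddeeplearning.com chapter linked above (weights of -2 on each input and a bias of 3):

```python
# A binary/threshold neuron wired as a NAND gate: weights (-2, -2), bias 3,
# as in the neuralnetworksanddeeplearning.com chapter linked above.
def binary_neuron(inputs, weights, bias):
    """Output 1 if the weighted sum plus bias is positive, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def nand(x1, x2):
    return binary_neuron((x1, x2), weights=(-2, -2), bias=3)

truth_table = {(x1, x2): nand(x1, x2) for x1 in (0, 1) for x2 in (0, 1)}
# -> {(0,0): 1, (0,1): 1, (1,0): 1, (1,1): 0}, i.e. NAND.
```

Since NAND is functionally complete, a network of such units can in principle compute any Boolean function, which is why the binary neuron invites the logic-gate reading.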

DiracPool
The binary neuron outputs 1 above threshold and 0 below threshold, so it is a perceptron, which, configured correctly, gives a NAND gate

The perceptron? Lol. Now you are really dating yourself. I grew up with the "neural network" models of the 80's, complete with "back-propagation" algorithms and "simulated annealing." I even worked with a team at UCSD in the mid-90's doing research on optical holographic memory in crystals, which unfortunately turned out to be a dead end; I actually had big hopes for that project.

Anyway, does that mean you don't object to binary neurons?

I don't object to anything; information is information. Again, I haven't studied the "binary neuron" model, so I'm not going to pretend to comment on it. However, what I will say is that, again, the most effective way to model cortical dynamics is at the population level. There's nothing "binary" happening at this level. Sure, at the level of the single neuron you have the "integrate and fire" paradigm, which when looked at in isolation is sort of a binary operation, but as I said in an earlier post, it is a red herring that fools many researchers into thinking neurons are functionally similar to logic gates.

While the wave-to-pulse conversion at the axon hillock produces discrete pulses, the pulse-to-wave conversion at the apical dendrites does exactly the opposite: they set up EM fields whose voltage distributions form chaotic attractors across the cortex. Locally, in cytoarchitectonically circumscribed regions such as V1 or S1, say, these attractors form spatial patterns related to the particular percept the animal is experiencing at the time. The global oscillations of these attractors are in the gamma range, roughly 40 Hz. Interareal synchronization of these primary sensory cortices with secondary cortices has a greater lag, putting that interareal synchronization in the "beta" range of approx. 12-20 Hz. Finally, hemisphere-wide synchronization occurs in the "alpha" range of roughly 6-12 Hz.