# Neural impulse, references and ideas

1. Jul 30, 2013

### fluidistic

Hello people,
I would like some references, be they books or papers, about the process of neural impulses at the single-neuron level.
My only reference so far is a book from 1974 (Stochastic Models in Biology), which is probably outdated despite devoting around 30 pages to the subject.
The more mathematics there is, the better for me.
Thank you.

2. Jul 31, 2013

### Greg Bernhardt

Have you looked through some current neuroscience textbooks? There are some cheap used ones on Amazon.

3. Jul 31, 2013

### fluidistic

No, I haven't yet. Thanks for the suggestion.
I believe I'm interested in the firing rate of a single neuron for different models (I guess they all include white noise?).
I've read a little about the Morris-Lecar model, which seems much more complicated than the model in the book I am using (which is written by Goel, by the way). I don't know the name of the model I'm dealing with. It assumes that the voltage of the soma satisfies $\frac{dV}{dt}=-\frac{V}{\tau}+i(t)$, where i(t) is a function representing the effect of other neurons on the soma of the particular neuron.
I'm basically at a loss.

4. Jul 31, 2013

### atyy

1974 is probably fine; the process was already well understood by then.

http://www.ncbi.nlm.nih.gov/books/NBK10799/ is free and has the basic ideas, but I think not mathematically.

http://icwww.epfl.ch/~gerstner/SPNM/node12.html is free and mathematical. The key equations are the Hodgkin-Huxley equations.

Thomas Weiss's "Cellular Biophysics", Christof Koch's "Biophysics of Computation" or Johnston and Wu's "Foundations of Cellular Neurophysiology" are very good, but not free.

The advances are in either simplified models, or more details about different channels. However, the Hodgkin-Huxley equations are still the basics for most modelling (unless you go to the single channel level, in which case the phenomenological variables in the Hodgkin-Huxley equations are not easily related to stuff you can measure).

Try http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3375691/ or http://www.jneurosci.org/content/31/24/8936.long to see current usage.

5. Jul 31, 2013

### fluidistic

Ok, thank you very much atyy. I'm going to have a close look at all these resources as soon as I can. It's very nice to know that my book is not that outdated on the subject.
Meanwhile, I have some questions and doubts. My book basically states that the potential of the soma has the form $\frac{dV}{dt}=h(V)+e(V)i(t)$, where i(t) is an input signal, e(V) describes the effect of the input signal (from what I understood on Wikipedia, this function would be the synaptic weight?) and h(V) describes the decay of the potential when there's no input signal.
Does this method have a name?

Then the book makes some simplifications: the mean value of the function i(t) is m, and h(V) takes the form $-V/\tau$, where tau is the decay constant. It also assumes that the change in the potential due to the arriving signal is independent of the current value of the potential and is proportional to the input.
The potential then takes the form $\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)$, where F(t) is a white-noise function (equal exactly to $\frac{i(t)-m}{\sigma}$) with mean 0, and both m and sigma are positive constants. I've solved the equation when the noise is 0, and it's a decreasing exponential ($V(t)=Ae^{-t/\tau}+m\tau$). So if I understand well, this V(t) describes the potential of the soma right after having fired? It has a high initial value and then exponentially decreases toward the mean input multiplied by tau.
Later the book states that m>0 is more realistic than m<0, which I can understand. But it also states that the limit where tau tends to infinity (not realistic) is equivalent to the case where the time taken for the potential to reach its resting value (m times tau, I guess) is much longer than the time between 2 firings.
So if I understand well, a huge value of tau would mean that the neuron fires extremely fast?
How unrealistic is this? Taking tau to infinity makes the math slightly simpler (though still drastically complicated) for a stochastic analysis.
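To get a feel for this equation, I tried simulating it with a simple Euler-Maruyama scheme (the values of tau, m and sigma below are just made up):

```python
import numpy as np

rng = np.random.default_rng(0)

tau, m, sigma = 10.0, 0.5, 0.2   # made-up values, just for illustration
dt, T = 0.01, 200.0
n = int(T / dt)

V = np.empty(n)
V[0] = 8.0                       # a high initial value, as right after a firing
for k in range(n - 1):
    # Euler-Maruyama step: the white-noise increment scales as sqrt(dt)
    V[k + 1] = V[k] + (-V[k] / tau + m) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# Without noise, V(t) = A exp(-t/tau) + m*tau, so after the initial decay
# V should hover around m*tau = 5 with noisy fluctuations
print(V[-n // 4:].mean())
```

The trajectory indeed decays exponentially from the initial value and then fluctuates around m times tau, which matches the noiseless solution above.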

6. Jul 31, 2013

### atyy

The notation is a bit different from what I'm used to, so here's my guess.

For a simple model I usually write CdV/dt = GR(ER-V) + GS(t)(ES-V). This is a model with no voltage dependent conductances, so no spikes, just passive membrane receiving synaptic input.

V = membrane potential
t = time
C = membrane capacitance
GR = resting membrane conductance
ER = resting membrane potential
GS = synaptic conductance
ES = synaptic reversal potential
ES-V is often called the "synaptic driving force"

If I rearrange I get dV/dt = (1/C)[-GR.V + GR.ER + GS(t)(ES-V)]

If I compare with the equation in your book, I get

h(V) = -GR.V/C
m = GR.ER/C
i(t) = GS(t)/C
e(V) = (ES-V)/C

So i(t) would be the synaptic conductance and e(V) would be the synaptic driving force (divided by the membrane capacitance).

With these simplifications, the model seems to be just the same as the simple model I wrote above, so it would have no action potentials and be just a passive membrane. The solution you wrote is just passive decay to the resting membrane potential from an initial condition in which the membrane had been perturbed from rest.
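Here's a quick numerical sketch of the passive-membrane model above, with made-up parameter values: a brief synaptic conductance pulse depolarizes the membrane toward ES, and afterwards the voltage just decays passively back to ER, with no spike.

```python
import numpy as np

# Passive membrane receiving a brief synaptic conductance pulse.
# All parameter values are illustrative, not from any specific cell.
C = 1.0        # membrane capacitance (nF)
GR = 0.1       # resting conductance (uS)
ER = -70.0     # resting potential (mV)
ES = 0.0       # excitatory synaptic reversal potential (mV)

dt, T = 0.01, 100.0              # time step and duration (ms)
n = int(T / dt)
t = np.arange(n) * dt

# Synaptic conductance: a square pulse between t = 10 and t = 12 ms
GS = np.where((t >= 10) & (t < 12), 0.5, 0.0)

V = np.empty(n)
V[0] = ER
for k in range(n - 1):
    dVdt = (GR * (ER - V[k]) + GS[k] * (ES - V[k])) / C
    V[k + 1] = V[k] + dVdt * dt

print(V.max())   # peak depolarization during the pulse
print(V[-1])     # decays back to ER afterwards (no spike: purely passive)
```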

7. Jul 31, 2013

### fluidistic

Ok I see, thank you. I'm starting to understand the book a bit better. This potential is the one right after a neuron fires and is valid only between 2 firings, if I understand well.
What is not clear to me is that right after the equation $\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)$, it says that the probability that the somatic potential has the value x at time t, knowing it had the value y at time t=0, satisfies the Fokker-Planck equation $\frac{\partial P}{\partial t} = - \frac{\partial }{\partial x} \left [ \left ( m- \frac{x}{\tau}\right ) P \right ] + \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}$.
I could be mathematically convinced of that, but not by looking at the potential function $V(t)=Ae^{-t/\tau}+m\tau$, which satisfies the noiseless equation. Looking at that function, there's no way there could be another firing, since V(t) decreases toward its resting value as t tends to infinity.
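To convince myself numerically, I simulated many sample paths of the stochastic equation and compared the mean and variance at time t with what the solution of the Fokker-Planck equation should give: a Gaussian with mean $ye^{-t/\tau}+m\tau(1-e^{-t/\tau})$ and variance $\frac{\sigma^2\tau}{2}(1-e^{-2t/\tau})$, if I did the math right (all parameter values below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

tau, m, sigma = 5.0, 0.3, 0.4    # made-up values
y, t_end = 2.0, 3.0              # start at V = y, look at the density at t_end
dt = 0.001
steps = int(t_end / dt)
paths = 20000

V = np.full(paths, y)
for _ in range(steps):
    V += (-V / tau + m) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

# Gaussian solution of the Fokker-Planck equation for this linear drift:
mean_th = y * np.exp(-t_end / tau) + m * tau * (1 - np.exp(-t_end / tau))
var_th = 0.5 * sigma**2 * tau * (1 - np.exp(-2 * t_end / tau))

print(V.mean(), mean_th)         # should agree
print(V.var(), var_th)           # should agree
```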

8. Jul 31, 2013

### atyy

From what you're telling me, I think this model is called the integrate-and-fire neuron http://lcn.epfl.ch/~gerstner/SPNM/node26.html.

No it is not obvious, as it depends on the noise. I don't remember exactly what noise gives that Fokker-Planck equation, but I think what your book has should be similar to http://lcn.epfl.ch/~gerstner/SPNM/node37.html (Eq 5.73 and 5.89)

You can also Google "Langevin equation" and "Diffusion Equation", which I think are mathematically related to your equations, eg. http://dasher.wustl.edu/bio5476/reading/stochastic.pdf (Try the "Smoluchowski Diffusion Equation" on p2).

This is probably the closest: http://alice.nc.huji.ac.il/~netazach/action potential/burkitt 2006.pdf (Eq 15, 25, 26)

Last edited: Jul 31, 2013
9. Jul 31, 2013

### Pythagorean

It looks like an Ornstein-Uhlenbeck process:
http://en.wikipedia.org/wiki/Ornstein–Uhlenbeck_process

in which the noise is a Wiener process. You can model it numerically. Lemons has the most straightforward treatment (maybe you'll get lucky with the correct pages on Google Books).

To me, the Morris-Lecar model is at least conceptually easier to understand. I like the description of the neuron's channel populations. Plus, you really need two dimensions to have oscillatory behavior in a deterministic (thus mechanistic) system, which is a source of confusion here (as you noted, it looks like the system can never fire again... and it can't, as far as the true continuous description goes. You choose a threshold for it, "artificially" introduce the spike, and reset the potential to a subthreshold value).
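Here's a minimal sketch of that threshold-and-reset trick for the one-dimensional model, with made-up parameters and an arbitrary threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

tau, m, sigma = 10.0, 0.8, 0.5   # made-up values
V_th, V_reset = 5.0, 0.0         # hypothetical threshold and reset potential
dt, T = 0.01, 500.0

V, t, spikes = 0.0, 0.0, []
while t < T:
    V += (-V / tau + m) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if V >= V_th:                # the "artificial" spike: record it and reset
        spikes.append(t)
        V = V_reset
    t += dt

print(len(spikes))               # the model now fires repeatedly
```

With the threshold below the noiseless steady state (m times tau), the voltage keeps climbing back and crossing it, so the model spikes over and over even though the continuous equation alone never would.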

Standard mathematical analysis of the Morris-Lecar neuron seems like a nightmare though; it's a system you want to understand graphically (and thus numerically) by looking at its nullclines, fixed points, and typical numerical solutions in different regimes (Fig 3):

http://pegasus.medsci.tokushima-u.ac.jp/~tsumoto/work/nlp_kyutech2003.pdf

If these terms are unfamiliar, there's a book on analyzing these kinds of models by Strogatz called "Nonlinear Dynamics and Chaos". The graphical analysis is in the first couple chapters.

(There are several differently sized papers by Tsumoto floating around with this title; I think I've found three. This is the medium-sized one, the smallest is a symposium paper, and I can't seem to find the bigger one that goes into more detail.)

10. Jul 31, 2013

### Pythagorean

Oh yeah, also a good general book that's completely free and out of print (but still appreciated):

Spikes, Decisions & Actions: Dynamical Foundations of Neuroscience

a free electronic copy is available on the author's website:
http://cvr.yorku.ca/webpages/wilson.htm

I've never read it myself, but I've seen modern authors like Bard Ermentrout (Mathematical Foundations of Neuroscience) recommend it. Another book that was fun was Izhikevich's "Dynamical Systems in Neuroscience". Izhikevich has his own model that he happily claims is one of the most efficient and biologically plausible neuron models out there:

http://wiki.transhumani.com/images/b/b8/Cost_of_neuron_models.jpg
(this is from one of his papers)

11. Jul 31, 2013

### fluidistic

Well, the book shows a derivation for the general case $\frac{dx}{dt}=\alpha (x)+ \beta (x)F(t)$, for which the Fokker-Planck or diffusion equation $\frac{\partial P(x|y,t)}{\partial t}=-\frac{\partial}{\partial x}[a(x)P(x|y,t)]+\frac{1}{2} \frac{\partial ^2 }{\partial x^2}[b(x)P(x|y,t)]$ is satisfied. The demonstration is rather lengthy...

Thanks for everything, guys.
I have a sort of monograph (~25 pages) to write in about 1 week, and I haven't even started yet (it was impossible for me to start earlier). It's not an obligation, but it would be a plus in my case. Since the subject is arbitrary but must be related to stochastic processes, I thought neuron firing was a good choice; it looked, and still looks, interesting. I didn't realize it was so complicated, nor do I know of any simpler subject.
I was thinking that maybe I could take a very simple neuron model and find an analytical solution for the probability of a neuron firing at time t, knowing that it fired at time t=0. However, I doubt I can take a simpler case than the one I'm dealing with (m>0 and tau equal to infinity), yet it yields either a system of coupled PDEs or a single PDE (I'm not even sure which; I don't understand the book there). And the solution given in the book for $P(x|y,t)$ has a typo, I believe, but since he "solved" the equation by looking it up in a table, I don't even know how to solve the PDE or the system of PDEs (I don't even know what he solved). I must say I'm a bit discouraged and short of ideas.

12. Jul 31, 2013

### atyy

Sounds good. The calculation of the average time between spikes (the inter-spike interval) in the integrate-and-fire neuron is a classic calculation, so it's nice.

With $m=0$ and $\tau = \infty$, I think the PDE you wrote in post #7 is the heat equation. Wikipedia gives the solution http://en.wikipedia.org/wiki/Heat_equation.
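For example, one can check symbolically that the Gaussian heat kernel solves $\frac{\partial P}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial x^2}$ (a quick sketch using sympy):

```python
import sympy as sp

x, y, t, sigma = sp.symbols('x y t sigma', positive=True)

# Fundamental solution (heat kernel) of P_t = (sigma^2/2) P_xx with
# initial condition P = delta(x - y) at t = 0:
P = sp.exp(-(x - y)**2 / (2 * sigma**2 * t)) / sp.sqrt(2 * sp.pi * sigma**2 * t)

residual = sp.diff(P, t) - sp.Rational(1, 2) * sigma**2 * sp.diff(P, x, 2)
print(sp.simplify(residual))     # 0: the kernel does solve the PDE
```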

Last edited: Jul 31, 2013
13. Jul 31, 2013

### fluidistic

Oh I see...
Ah you're right. I'd have to see what are the boundary conditions, etc. But m=0 would mean that the other neurons aren't affecting the particular neuron I consider. In other words the mean value of i(t) would be worth 0, that is, the mean value of the input signal would vanish. So this would be less realistic than m>0, right? Albeit more simple to deal with.

14. Jul 31, 2013

### atyy

I'm not sure, but I think the neuron can still fire. m=0 means the mean input from other neurons is 0, so the neuron receives equal amounts of input from neurons that excite it and from neurons that inhibit it. Although the neuron is receiving zero net input, it is still receiving excitatory input. So if the excitatory and inhibitory inputs don't cancel exactly at all times, i.e. the variance is large, maybe the neuron can still spike.

But if that doesn't work out, the closed form solution for the mean inter-spike interval is given in Eq 5.104 of http://icwww.epfl.ch/~gerstner/SPNM/node37.html . I don't think the closed form Eq 5.104 is so crucial, the more important bit of reasoning is why $<s> = \int sP_{I_{o}}(s|0) \, ds$ gives the mean interspike interval.
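As a sanity check on that reasoning, here's a rough Monte Carlo estimate of the mean first-passage time to a threshold B for the $\tau \to \infty$, m>0 case, where the mean interspike interval should just be (B-y)/m (all values below are made up):

```python
import numpy as np

rng = np.random.default_rng(3)

m, sigma = 0.5, 0.3              # made-up values, with m > 0
y, B = 0.0, 2.0                  # starting value and firing threshold
dt, trials = 0.001, 2000

V = np.full(trials, y)
t = np.zeros(trials)
alive = np.ones(trials, dtype=bool)
while alive.any():
    k = alive.sum()
    # advance only the paths that have not yet reached the threshold
    V[alive] += m * dt + sigma * np.sqrt(dt) * rng.standard_normal(k)
    t[alive] += dt
    alive = V < B

# With positive drift every path eventually crosses B, and the mean
# first-passage time (mean interspike interval) is (B - y)/m
print(t.mean(), (B - y) / m)
```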

Last edited: Jul 31, 2013
15. Jul 31, 2013

### atyy

If your project allows you to do computational stuff, there's a free and friendly neuron simulator with which you can make integrate-and-fire neurons and inject Ornstein-Uhlenbeck processes etc. http://briansimulator.org/

16. Jul 31, 2013

### fluidistic

Thank you atyy for the numerical simulator. I can't use it for the work I'm trying to finish on time, but it's good to know about it.
With respect to the case m=0, you are right, I think... And if I'm not wrong, the neuron will eventually fire. It is only in the case m<0 that the probability of firing becomes less than 1. The book gives the probability of firing for m<0 as $R(B,y)=\exp \{ -2(B-y)|m|/\sigma ^2 \}$, and 1 for m>0. Oddly enough, it doesn't say a word about the case m=0, but it's obvious that it's 1, because the limit as m tends to 0 from either side is 1.
So I guess my goal would be to explicitly derive the equations and solve them entirely for the special case m=0, and then show that the result agrees with the book when I take the limit of m tending to 0... I believe that's a good idea for the work I'm asked to do (which is nothing serious at all, but they want us to do some math and not copy the book word for word).
I'm going to spend the next few days on it.
I'll ask questions in the DE section, because I have some doubts.
Also, I would like to thank Pythagorean once more for the book Spikes, Decisions & Actions. It really seems interesting and nice.
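As a quick check of the formula for R(B,y), one can simulate the $\tau = \infty$, m<0 case over a long but finite horizon and count the fraction of paths that ever reach the threshold B (values below are made up; the finite horizon and time step slightly undercount crossings):

```python
import numpy as np

rng = np.random.default_rng(4)

m, sigma = -0.2, 1.0             # made-up values, with m < 0
y, B = 0.0, 1.0                  # starting value and firing threshold
dt, T, trials = 0.01, 100.0, 5000
steps = int(T / dt)

V = np.full(trials, y)
hit = np.zeros(trials, dtype=bool)
for _ in range(steps):
    V += m * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
    hit |= V >= B               # mark paths that have touched the threshold

# Goel's formula: for m < 0 the probability of ever firing is
# R(B, y) = exp(-2 (B - y) |m| / sigma^2), here exp(-0.4)
print(hit.mean(), np.exp(-2 * (B - y) * abs(m) / sigma**2))
```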

17. Aug 1, 2013

### fluidistic

Also, my equation for P(x|y,t) would not be the heat equation, I believe, because P is a function of x, y and t. So the term $\frac{\partial ^2 P }{\partial x^2}$ is not the Laplacian required for the heat equation. On top of that, P satisfies another PDE (the Kolmogorov backward equation, I think). So I still have a system of 2 PDEs, with 3 boundary conditions.
I've never dealt with that before. I hope it's not that hard to solve.

Edit: Never mind. When I sum the 2 PDEs I recover the heat equation in Cartesian coordinates!

Last edited: Aug 1, 2013
18. Aug 1, 2013

### atyy

I looked up Goel. The mean time to first spike becomes infinite for m=0 and $\tau = \infty$. Maybe one has to keep tau finite for something reasonable.

19. Aug 1, 2013

### fluidistic

Oh... Well, I took $\tau = \infty$ and m=0. On page 192 he takes $\tau \to \infty$ but doesn't restrict the value of m. On the next page, skipping most of the math, he reaches the 2 probabilities I wrote, for m>0 and m<0, but doesn't say anything about the (obvious) case m=0.

20. Aug 1, 2013

### atyy

Yes, but in Goel's Eq 16a, if you set m=0 the mean time to spike is infinite.

I think if m=0 and $\tau = \infty$, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk, which could diverge to -∞.

But if tau=1 (in some units), then dx/dt=-x+i(t), so x has a tendency to relax back to zero in the absence of a stimulus, whether it starts from x>0 or x<0.
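A quick simulation makes the contrast visible: without the leak, the variance of x grows linearly in time, while with tau=1 it saturates (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

sigma, dt, T, paths = 1.0, 0.01, 50.0, 5000
steps = int(T / dt)

x_rw = np.zeros(paths)   # dx/dt = i(t): pure random walk (m = 0, no leak)
x_ou = np.zeros(paths)   # dx/dt = -x + i(t): the leak pulls x back to zero

for _ in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(paths)
    x_rw += noise
    x_ou += -x_ou * dt + noise

print(x_rw.var())   # grows like sigma^2 * t (= 50 here), so it can wander off
print(x_ou.var())   # saturates near sigma^2 / 2 (= 0.5 here)
```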