Neuroscience Neural impulse, references and ideas

AI Thread Summary
The discussion centers on seeking updated references regarding neural impulses at the single neuron level, with a focus on mathematical models. The original poster mentions a 1974 book and expresses interest in the firing rate of neurons and models like Morris-Lecar. Participants suggest various resources, including free online materials and notable textbooks, emphasizing the significance of the Hodgkin-Huxley equations in modeling. The conversation also delves into the mathematical representation of neuronal potential, discussing the implications of parameters like tau and the role of noise in firing dynamics. Overall, the exchange highlights the complexity of neuronal modeling and the importance of current literature in understanding these processes.
fluidistic
Hello people,
I would like some references, be it book or papers about the process of neural impulses at a single neuron level.
My only reference so far is a book from 1974 (Stochastic Models in Biology), which is probably outdated despite devoting around 30 pages to the subject.
The more mathematics there is, the better for me.
Thank you.
 
Have you looked through some current neuroscience textbooks? There are some cheap used ones on amazon.
 
Greg Bernhardt said:
Have you looked through some current neuroscience textbooks? There are some cheap used ones on amazon.

No I haven't, yet. Thanks for the suggestion.
I believe I'm interested in the firing rate of a single neuron for different models (I guess they all include white noise?).
I've read a little about the Morris-Lecar model, which seems much more complicated than the model in the book I am using (written by Goel, by the way). I don't know the name of the model I'm dealing with. It assumes that the voltage of the soma satisfies ##\frac{dV}{dt}=-\frac{V}{\tau}+i(t)##, where i(t) is a function due to the effect of other neurons on the soma of the particular neuron.
I'm basically at a loss.
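For what it's worth, the model in that last equation is easy to play with numerically. Below is a minimal forward-Euler sketch of ##\frac{dV}{dt}=-\frac{V}{\tau}+i(t)##; the function name, the constant-input choice and all parameter values are illustrative assumptions, not taken from any particular book.

```python
def simulate_membrane(v0, tau, i_func, t_end, dt=1e-3):
    """Forward-Euler integration of dV/dt = -V/tau + i(t)."""
    v, t = v0, 0.0
    trace = [v]
    while t < t_end:
        v += dt * (-v / tau + i_func(t))
        t += dt
        trace.append(v)
    return trace

# With a constant input i(t) = m, V relaxes exponentially toward m * tau.
tau, m = 10.0, 0.5
trace = simulate_membrane(v0=0.0, tau=tau, i_func=lambda t: m, t_end=100.0)
```

With a time-varying `i_func` (e.g. random synaptic bombardment) the same loop gives the noisy trajectories discussed later in the thread.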
 
1974 is probably fine, they understood it well then.

http://www.ncbi.nlm.nih.gov/books/NBK10799/ is free and has the basic ideas, but I think not mathematically.

http://icwww.epfl.ch/~gerstner/SPNM/node12.html is free and mathematical. The key equations are the Hodgkin-Huxley equations.

Thomas Weiss's "Cellular Biophysics", Christof Koch's "Biophysics of Computation" or Johnston and Wu's "Foundations of Cellular Neurophysiology" are very good, but not free.

The advances are in either simplified models, or more details about different channels. However, the Hodgkin-Huxley equations are still the basics for most modelling (unless you go to the single channel level, in which case the phenomenological variables in the Hodgkin-Huxley equations are not easily related to stuff you can measure).

Try http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3375691/ or http://www.jneurosci.org/content/31/24/8936.long to see current usage.
 
Ok, thank you very much atyy, I'm going to have a close look at all these resources as soon as I can. It's very nice to know that my book is not that outdated on the subject.
Meanwhile, I have some questions and doubts. My book basically states that the potential of the soma has the form ##\frac{dV}{dt}=h(V)+e(V)i(t)##, where i(t) is an input signal, e(V) describes the effect of the input signal (from what I understood on Wikipedia, this function would be the synaptic weight?) and h(V) describes the decay of the potential when there's no input signal.
Does this method have a name?

Then the book makes some simplifications: the mean value of the function i(t) is m, and h(V) takes the form ##-V/\tau##, where tau is the decay time constant. It also assumes that the change in the potential due to an arriving signal is independent of the current value of the potential and is proportional to the input.
The potential then satisfies ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, where F(t) is a white-noise function (equal to ##\frac{i(t)-m}{\sigma}##) with mean 0, and both m and sigma are positive constants. I've solved the equation when the noise is 0 and the solution is a decreasing exponential (##V(t)=Ae^{-t/\tau}+m\tau##). So if I understand correctly, this V(t) describes the potential of the soma right after it has fired? It starts at a high initial value and then decreases exponentially toward the mean input m multiplied by tau.
Later the book states that m>0 is more realistic than m<0, which I can understand. But it also states that the limit where tau tends to infinity (not realistic) is equivalent to the case where the time taken for the potential to reach its resting value (m times tau, I guess) is much longer than the time between 2 firings.
So if I understand correctly, a huge value for tau would mean that the neuron fires extremely fast?
How unrealistic is this? Taking tau tending to infinity makes the math slightly simpler (though still drastically complicated) for a stochastic analysis.
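The noiseless solution quoted above is easy to sanity-check numerically: integrating ##\frac{dV}{dt}=-\frac{V}{\tau}+m## with forward Euler should reproduce ##V(t)=Ae^{-t/\tau}+m\tau## (with ##A=V(0)-m\tau##). The constants below are arbitrary illustrations.

```python
import math

tau, m, A = 5.0, 0.2, 3.0                      # arbitrary illustrative constants
v_exact = lambda t: A * math.exp(-t / tau) + m * tau

# Forward Euler on dV/dt = -V/tau + m, starting from the matching initial value
dt, t_end = 1e-4, 20.0
v, t = v_exact(0.0), 0.0
while t < t_end:
    v += dt * (-v / tau + m)
    t += dt

err = abs(v - v_exact(t_end))                  # should be tiny for small dt
```

The agreement confirms the exponential decay toward the asymptote ##m\tau##.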
 
fluidistic said:
Ok, thank you very much atyy, I'm going to have a close look at all these resources as soon as I can. It's very nice to know that my book is not that outdated on the subject.
Meanwhile, I have some questions and doubts. My book basically states that the potential of the soma has the form ##\frac{dV}{dt}=h(V)+e(V)i(t)##, where i(t) is an input signal, e(V) describes the effect of the input signal (from what I understood on Wikipedia, this function would be the synaptic weight?) and h(V) describes the decay of the potential when there's no input signal.
Does this method have a name?

The notation is a bit different from what I'm used to, so here's my guess.

For a simple model I usually write ##C\frac{dV}{dt} = G_R(E_R-V) + G_S(t)(E_S-V)##. This is a model with no voltage-dependent conductances, so no spikes, just a passive membrane receiving synaptic input.

V = membrane potential
t = time
C = membrane capacitance
##G_R## = resting membrane conductance
##E_R## = resting membrane potential
##G_S## = synaptic conductance
##E_S## = synaptic reversal potential
##E_S-V## is often called the "synaptic driving force"

If I rearrange I get ##\frac{dV}{dt} = \frac{1}{C}\left[-G_R V + G_R E_R + G_S(t)(E_S-V)\right]##

If I compare with the equation in your book, I get

##h(V) = -G_R V/C##
##m = G_R E_R/C##
##i(t) = G_S(t)/C##
##e(V) = E_S-V##

So i(t) would be the synaptic conductance (divided by the membrane capacitance) and e(V) would be the synaptic driving force.
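The passive-membrane equation above can be checked numerically. The sketch below (function name and all parameter values are illustrative assumptions, not from the thread) integrates ##C\,dV/dt = G_R(E_R-V)+G_S(t)(E_S-V)## with forward Euler; with a constant synaptic conductance the voltage settles at the conductance-weighted average of the two reversal potentials.

```python
def passive_membrane(v0, C, GR, ER, GS_func, ES, t_end, dt=1e-3):
    """Forward-Euler integration of C dV/dt = GR*(ER - V) + GS(t)*(ES - V)."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (GR * (ER - v) + GS_func(t) * (ES - v)) / C
        t += dt
    return v

# Illustrative values: with a constant GS the membrane settles at the
# conductance-weighted average (GR*ER + GS*ES) / (GR + GS).
C, GR, ER, GS, ES = 1.0, 0.1, -70.0, 0.1, 0.0
v_final = passive_membrane(-70.0, C, GR, ER, lambda t: GS, ES, t_end=200.0)
```

With the values above the steady state is (0.1·(-70) + 0.1·0)/0.2 = -35, illustrating how strong synaptic input pulls the membrane toward the synaptic reversal potential.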

fluidistic said:
Then the book makes some simplifications: the mean value of the function i(t) is m, and h(V) takes the form ##-V/\tau##, where tau is the decay time constant. It also assumes that the change in the potential due to an arriving signal is independent of the current value of the potential and is proportional to the input.
The potential then satisfies ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, where F(t) is a white-noise function (equal to ##\frac{i(t)-m}{\sigma}##) with mean 0, and both m and sigma are positive constants. I've solved the equation when the noise is 0 and the solution is a decreasing exponential (##V(t)=Ae^{-t/\tau}+m\tau##). So if I understand correctly, this V(t) describes the potential of the soma right after it has fired? It starts at a high initial value and then decreases exponentially toward the mean input m multiplied by tau.
Later the book states that m>0 is more realistic than m<0, which I can understand. But it also states that the limit where tau tends to infinity (not realistic) is equivalent to the case where the time taken for the potential to reach its resting value (m times tau, I guess) is much longer than the time between 2 firings.
So if I understand correctly, a huge value for tau would mean that the neuron fires extremely fast?
How unrealistic is this? Taking tau tending to infinity makes the math slightly simpler (though still drastically complicated) for a stochastic analysis.

With these simplifications, the model seems to be just the same as the simple model I wrote above, so it would have no action potentials: just a passive membrane. The solution you wrote is just passive decay to the resting membrane potential, from an initial condition in which the membrane had been perturbed from rest.
 
Ok I see, thank you. I'm starting to understand the book a bit better. This potential is the one right after a neuron has fired, and it is valid only between 2 firings, if I understand correctly.
What is not clear to me is that, right after the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, it says that the probability that the somatic potential has the value x at time t, given that it had the value y at time t=0, satisfies the Fokker-Planck equation ##\frac{\partial P}{\partial t} = - \frac{\partial }{\partial x} \left [ \left ( m- \frac{x}{\tau}\right ) P \right ] + \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}##.
I could be mathematically convinced of that, but not by looking at the potential function ##V(t)=Ae^{-t/\tau}+m\tau##, which satisfies the noiseless equation. Looking at that function, there's no way there could be another firing, since V(t) decreases toward its resting value as t tends to infinity.
 
fluidistic said:
Ok I see, thank you. I'm starting to understand the book a bit better. This potential is the one right after a neuron has fired, and it is valid only between 2 firings, if I understand correctly.

From what you're telling me, I think this model is called the integrate-and-fire neuron http://lcn.epfl.ch/~gerstner/SPNM/node26.html.

fluidistic said:
What is not clear to me is that, right after the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, it says that the probability that the somatic potential has the value x at time t, given that it had the value y at time t=0, satisfies the Fokker-Planck equation ##\frac{\partial P}{\partial t} = - \frac{\partial }{\partial x} \left [ \left ( m- \frac{x}{\tau}\right ) P \right ] + \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}##.
I could be mathematically convinced of that, but not by looking at the potential function ##V(t)=Ae^{-t/\tau}+m\tau##, which satisfies the noiseless equation. Looking at that function, there's no way there could be another firing, since V(t) decreases toward its resting value as t tends to infinity.

No it is not obvious, as it depends on the noise. I don't remember exactly what noise gives that Fokker-Planck equation, but I think what your book has should be similar to http://lcn.epfl.ch/~gerstner/SPNM/node37.html (Eq 5.73 and 5.89)

You can also Google "Langevin equation" and "Diffusion Equation", which I think are mathematically related to your equations, eg. http://dasher.wustl.edu/bio5476/reading/stochastic.pdf (Try the "Smoluchowski Diffusion Equation" on p2).

This is probably the closest: http://alice.nc.huji.ac.il/~netazach/action%20potential/burkitt%202006.pdf (Eq 15, 25, 26)
 
It looks like an Ornstein-Uhlenbeck process:
http://en.wikipedia.org/wiki/Ornstein–Uhlenbeck_process

in which the noise is a Wiener process. You can model it numerically. Lemons has the most straightforward treatment (maybe you'll get lucky with the correct page on Google Books :)

http://books.google.ca/books/about/...ic_Processes.html?id=Uw6YDkd_CXcC&redir_esc=y
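An Ornstein-Uhlenbeck path is only a few lines to simulate. The sketch below (parameter values are illustrative assumptions) uses the Euler-Maruyama discretization of ##\frac{dx}{dt}=-\frac{x}{\tau}+m+\sigma F(t)## and checks its stationary statistics: mean ##m\tau## and variance ##\sigma^2\tau/2##.

```python
import math
import random

random.seed(0)

def ou_path(x0, tau, m, sigma, dt, n_steps):
    """Euler-Maruyama discretization of dx = (-x/tau + m) dt + sigma dW."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += (-x / tau + m) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(x)
    return path

tau, m, sigma, dt = 2.0, 1.0, 0.5, 0.01
path = ou_path(0.0, tau, m, sigma, dt, n_steps=200_000)
samples = path[10_000:]                  # discard the transient toward m*tau
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Stationary statistics: mean -> m*tau = 2.0, variance -> sigma**2 * tau / 2 = 0.25
```

Adding a threshold test and a reset to this loop turns it directly into the leaky integrate-and-fire neuron discussed above.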

fluidistic said:
I've read a little about the Morris-Lecar model, which seems much more complicated than the model in the book I am using

To me, the Morris-Lecar model is at least conceptually easier to understand. I like the description of the neuron's channel populations. Plus, you really need two dimensions to have oscillatory behavior in a deterministic (thus, mechanistic) system, which leads to confusion (as you noted, it looks like the system can never fire again... and it can't, as far as true continuous descriptions go. You choose a threshold, "artificially" introduce the spike, and reset the position to subthreshold).

Standard mathematical analysis of the Morris-Lecar neuron seems like a nightmare, though; it's a system you want to understand graphically (and thus numerically) by looking at its nullclines, fixed points, and typical numerical solutions in different regimes (Fig 3):

http://pegasus.medsci.tokushima-u.ac.jp/~tsumoto/work/nlp_kyutech2003.pdf

If these terms are unfamiliar, there's a book on analyzing these kinds of models by Strogatz called "Nonlinear Dynamics and Chaos". The graphical analysis is in the first couple chapters.

(The Tsumoto paper above has several different versions floating around with this title; I think I've found three of different lengths. This is the medium-sized one, the shorter one is a symposium paper, and I can't seem to find the longer one that goes into more detail.)
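The Morris-Lecar right-hand side is short to write down despite the scary analysis. The sketch below uses the commonly cited Rinzel-Ermentrout parameter set (treat the exact numbers as an assumption) and exercises only the tame case of zero applied current, where the model relaxes to a stable rest state; the oscillatory regimes at higher currents are best explored with the nullcline analysis described above.

```python
import math

# Morris-Lecar parameters; these follow the commonly cited Rinzel-Ermentrout
# set -- the exact numbers are an assumption, not from the thread.
P = dict(C=20.0, gCa=4.4, VCa=120.0, gK=8.0, VK=-84.0, gL=2.0, VL=-60.0,
         V1=-1.2, V2=18.0, V3=2.0, V4=30.0, phi=0.04)

def ml_rhs(v, w, I, p=P):
    """Morris-Lecar vector field: fast Ca activation m_inf, slow K gate w."""
    m_inf = 0.5 * (1 + math.tanh((v - p['V1']) / p['V2']))
    w_inf = 0.5 * (1 + math.tanh((v - p['V3']) / p['V4']))
    rate = p['phi'] * math.cosh((v - p['V3']) / (2 * p['V4']))
    dv = (I - p['gCa'] * m_inf * (v - p['VCa'])
            - p['gK'] * w * (v - p['VK'])
            - p['gL'] * (v - p['VL'])) / p['C']
    dw = rate * (w_inf - w)
    return dv, dw

# With no applied current this parameter set has a stable rest state, so the
# trajectory should simply settle there (spiking needs I in a higher range).
v, w, dt = -60.0, 0.0, 0.05
vs = []
for _ in range(20_000):                       # ~1000 ms of model time
    dv, dw = ml_rhs(v, w, I=0.0)
    v += dt * dv
    w += dt * dw
    vs.append(v)
```

Sweeping `I` upward in this loop and plotting `vs` is a quick way to see the transition from rest to oscillation that the nullcline pictures explain.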
 
  • #10
Oh yeah, also a good general book that's completely free and out of print (but still appreciated):

Spikes, Decisions & Actions: Dynamical Foundations of Neuroscience

a free electronic copy is available on the author's website:
http://cvr.yorku.ca/webpages/wilson.htm

I've never read it myself, but I've seen modern authors like Bard Ermentrout (Mathematical Foundations of Neuroscience) suggest it. Another book that was fun was Izhikevich's "Dynamical Systems in Neuroscience". Izhikevich has his own model that he happily claims is one of the most efficient and biologically plausible neurons out there:

http://wiki.transhumani.com/images/b/b8/Cost_of_neuron_models.jpg
(this is from one of his papers)
 
  • #11
atyy said:
No it is not obvious, as it depends on the noise. I don't remember exactly what noise gives that Fokker-Planck equation, but I think what your book has should be similar to http://lcn.epfl.ch/~gerstner/SPNM/node37.html (Eq 5.73 and 5.89)
Well, the book shows a derivation for the general case ##\frac{dx}{dt}=\alpha (x)+ \beta (x)F(t)##, for which the Fokker-Planck (diffusion) equation is satisfied: ##\frac{\partial P(x|y,t)}{\partial t}=-\frac{\partial}{\partial x}[\alpha(x)P(x|y,t)]+\frac{1}{2} \frac{\partial ^2 }{\partial x^2}[\beta ^2(x)P(x|y,t)] ##. The demonstration is rather lengthy...

Thanks for everything, guys.
I have a sort of monograph (~25 pages) to write in about 1 week and I haven't even started yet (it was impossible for me to start before). It's not an obligation, but it would be a plus in my case. Since the subject is arbitrary but must be related to stochastic processes, I thought neuron firing was a good choice; it looked and still looks interesting. I didn't realize it was so complicated, nor do I know of any simpler subjects.
I was thinking that maybe I could take a very simple neuron model and find an analytical solution for the probability of the neuron firing at time t, given that it fired at time t=0. However, I doubt I can take a simpler case than the one I'm dealing with (m>0 and tau equal to infinity), yet it yields either a system of coupled PDEs or a single PDE (I'm not even sure of this; I don't understand the book there). And the solution the book gives for ##P(x|y,t)## has a typo, I believe, but since it "solves" the equation by looking at a table, I don't even know how to solve the PDE or the system of PDEs (I don't even know what was solved). I must say I'm a bit discouraged and short of ideas.
 
  • #12
fluidistic said:
I was thinking, maybe if I could take a very simple neuron method and find an analytical solution when I calculate the probability of a neuron firing at time t knowing that it fired at time t=0. However I doubt I can take a simpler case than the one I'm dealing with (when m>0 and tau is worth infinity), yet it yields either a system of coupled PDE's or a single PDE (I'm not even sure of this, I don't understand the book there). And the solution given in the book for ##P(x|y,t)## has a typo I believe, but since he "solved" the equation by looking at a table, I don't even know how to solve the PDE or the system of PDE's (I don't even know what he solved).

Sounds good. The calculation of the average time between spikes (the inter-spike interval) in the integrate-and-fire neuron is a classic calculation, so it's nice.

With ##m=0## and ##\tau = \infty##, I think the PDE you wrote in post #7 is the heat equation. Wikipedia gives the solution http://en.wikipedia.org/wiki/Heat_equation.
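The claim is easy to verify: with ##m=0## and ##\tau=\infty## the Fokker-Planck equation reduces to ##\frac{\partial P}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial x^2}##, whose fundamental solution is the Gaussian heat kernel. A quick finite-difference check (the point and constants below are arbitrary):

```python
import math

sigma, y = 0.8, 0.0   # illustrative values; y is the starting potential

def P(x, t):
    """Heat kernel solving dP/dt = (sigma^2/2) d2P/dx2 with a point source at y."""
    return (math.exp(-(x - y) ** 2 / (2 * sigma ** 2 * t))
            / math.sqrt(2 * math.pi * sigma ** 2 * t))

# Verify the PDE by central finite differences at an arbitrary point (x, t).
x, t, h = 0.7, 1.3, 1e-4
dPdt = (P(x, t + h) - P(x, t - h)) / (2 * h)
d2Pdx2 = (P(x + h, t) - 2 * P(x, t) + P(x - h, t)) / h ** 2
residual = dPdt - 0.5 * sigma ** 2 * d2Pdx2
```

The residual vanishes up to discretization error, confirming that the Gaussian spreads with variance ##\sigma^2 t##, exactly the free diffusion of the membrane potential.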
 
  • #13
atyy said:
Sounds good. The calculation of the average time between spikes (the inter-spike interval) in the integrate-and-fire neuron is a classic calculation, so it's nice.
Oh I see...
atyy said:
With ##m=0## and ##\tau = \infty##, I think the PDE you wrote in post #7 is the heat equation. Wikipedia gives the solution http://en.wikipedia.org/wiki/Heat_equation
Ah, you're right. I'd have to see what the boundary conditions are, etc. But m=0 would mean that the other neurons aren't affecting the particular neuron I'm considering. In other words, the mean value of i(t) would be 0, that is, the mean value of the input signal would vanish. So this would be less realistic than m>0, right? Albeit simpler to deal with.
 
  • #14
fluidistic said:
Oh I see...

Ah, you're right. I'd have to see what the boundary conditions are, etc. But m=0 would mean that the other neurons aren't affecting the particular neuron I'm considering. In other words, the mean value of i(t) would be 0, that is, the mean value of the input signal would vanish. So this would be less realistic than m>0, right? Albeit simpler to deal with.

I'm not sure, but I think the neuron can still fire. m=0 means the mean input from other neurons is 0, so the neuron receives equal amounts of input from neurons that excite it and from neurons that inhibit it. Although the neuron is receiving zero net input, it is still receiving excitatory input. So if the excitatory and inhibitory inputs don't cancel exactly at all times, i.e. the variance is large, maybe the neuron can still spike.

But if that doesn't work out, the closed-form solution for the mean inter-spike interval is given in Eq 5.104 of http://icwww.epfl.ch/~gerstner/SPNM/node37.html . I don't think the closed form of Eq 5.104 is so crucial; the more important bit of reasoning is why ##\langle s \rangle = \int sP_{I_{o}}(s|0) \, ds## gives the mean inter-spike interval.
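The mean inter-spike interval can also be estimated by brute-force Monte Carlo. The sketch below (names and parameter values are my own illustrative choices) uses the simplest drift-plus-noise integrator, ##dV = m\,dt + \sigma\,dW## with ##\tau=\infty##, for which the mean first-passage time from 0 to threshold B is the standard inverse-Gaussian mean B/m.

```python
import math
import random

random.seed(1)

def first_passage_time(m, sigma, B, dt=1e-3, t_max=200.0):
    """Time for dV = m dt + sigma dW (V(0) = 0) to first reach threshold B."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += m * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        if v >= B:
            return t
    return None  # did not reach threshold within t_max

m, sigma, B = 1.0, 0.5, 2.0
times = [first_passage_time(m, sigma, B) for _ in range(1000)]
times = [t for t in times if t is not None]
mean_isi = sum(times) / len(times)
# For this drift-diffusion crossing, the mean first-passage time is B/m = 2.0
```

The empirical mean converges to B/m; the spread of `times` around it is the inter-spike interval variability the thread is discussing.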
 
  • #15
If your project allows you to do computational stuff, there's a free and friendly neuron simulator with which you can make integrate-and-fire neurons and inject Ornstein-Uhlenbeck processes etc. http://briansimulator.org/
 
  • #16
Thank you atyy for the numerical simulator. I can't use it for the work I'm trying to finish on time, but it's good to know about.
With respect to the case m=0, you are right, I think... And if I'm not wrong, the neuron will eventually fire. It is only for the case m<0 that the probability of firing becomes less than 1. The book gives the probability of firing for m<0 as ##R(B,y)=\exp \{ -2(B-y)|m|/\sigma ^2 \}##, and 1 for m>0. Oddly enough, it doesn't say a word about the case m=0, but it's obviously 1, because the limit as m tends to 0 from both sides is 1.
So I guess my goal would be to explicitly derive the equations and solve them entirely for the special case m=0, and then show that the result agrees with the book when I take the limit of m tending to 0... That's a good idea, I believe, for the work I'm asked to do (which is nothing serious at all; but they want us to do some math and not copy the book word for word).
I'm going to spend the next days on it.
I'll ask questions in the DE's section, because I have some doubts.
Also I would like to thank Pythagorean once more for the book Spikes, Decisions & Actions. It really seems interesting and nice.
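The hitting probability quoted above, ##R(B,y)=\exp\{-2(B-y)|m|/\sigma^2\}## for m<0, can be checked by simulation. A sketch (parameter choices and the early-termination floor are my own assumptions; both the floor and the finite time step bias the estimate slightly low):

```python
import math
import random

random.seed(2)

def hits_threshold(y, m, sigma, B, dt=0.01, floor=-7.0):
    """Does dV = m dt + sigma dW (V(0) = y, m < 0) ever reach B?  The walk is
    abandoned once it drifts far below B, where a return is overwhelmingly
    unlikely (this truncation biases the estimate down very slightly)."""
    v = y
    while v > floor:
        v += m * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if v >= B:
            return True
    return False

m, sigma, B, y = -0.5, 1.0, 1.0, 0.0
n = 2000
p_hit = sum(hits_threshold(y, m, sigma, B) for _ in range(n)) / n
# The formula above gives R(B, y) = exp(-2 (B - y) |m| / sigma^2) = exp(-1) ~ 0.37
```

Pushing m more negative makes `p_hit` drop exponentially, matching the formula's prediction that an inhibition-dominated neuron may simply never fire.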
 
  • #17
Also, my equation for P(x|y,t) would not be the heat equation, I believe, because P is a function of x, y and t, so the term ##\frac{\partial ^2 P }{\partial x^2}## is not the full Laplacian required for the heat equation. On top of that, P satisfies another PDE (the Kolmogorov backward equation, I think). So I still have a system of 2 PDEs, with 3 boundary conditions.
I've never dealt with that before. I hope it's not that hard to solve.

Edit: Never mind. When I sum the 2 PDEs, I arrive at the heat equation in Cartesian coordinates!
 
  • #18
I looked up Goel. The mean time to first spike becomes infinite for m=0 and tau=0. Maybe one has to keep tau>0 for something reasonable.
 
  • #19
atyy said:
I looked up Goel. The mean time to first spike becomes infinite for m=0 and tau=0. Maybe one has to keep tau>0 for something reasonable.

Oh... Well, I took ##\tau = \infty## and m=0. On page 192 he takes ##\tau \to \infty## but doesn't restrict the value of m. On the next page, skipping most of the math, he reaches the 2 probabilities I wrote for m>0 and m<0, but doesn't say anything about the (obvious) case m=0.
 
  • #20
fluidistic said:
Oh... Well, I took ##\tau = \infty## and m=0. On page 192 he takes ##\tau \to \infty## but doesn't restrict the value of m. On the next page, skipping most of the math, he reaches the 2 probabilities I wrote for m>0 and m<0, but doesn't say anything about the (obvious) case m=0.

Yes, but in Goel's Eq 16a, if you set m=0 the mean time to spike is infinite.

I think if m=0 and tau=0, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.

But if tau=1 (for some units), then dx/dt=-x+i(t), so x has a tendency to relax back to zero in the absence of a stimulus, whether it starts from x>0 or x<0.
 
  • #21
atyy said:
Yes, but in Goel's Eq 16a, if you set m=0 the mean time to spike is infinite.
Oh yes, you are right. However, the probability of firing would still be 1. I interpret this as the neuron taking an infinite amount of time but eventually firing, if that makes any sense. When m is negative, the probability of firing is less than 1, even after waiting an infinite amount of time.
atyy said:
I think if m=0 and tau=0, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.

But if tau=1 (for some units), then dx/dt=-x+i(t), so x has a tendency to relax back to zero in the absence of a stimulus, whether it starts from x>0 or x<0.

If tau is 0, I don't really see how you obtain the eq. dx/dt=i(t). Originally the eq. is ##\frac{dx}{dt}=-\frac{x}{\tau}+i(t)##. Tau equal to 0 seems problematic to me, as the first term blows up.
 
  • #22
fluidistic said:
Oh yes, you are right. However, the probability of firing would still be 1. I interpret this as the neuron taking an infinite amount of time but eventually firing, if that makes any sense. When m is negative, the probability of firing is less than 1, even after waiting an infinite amount of time.

Yes, that makes sense.

fluidistic said:
If tau is 0, I don't really see how you obtain the eq. dx/dt=i(t). Originally the eq. is ##\frac{dx}{dt}=-\frac{x}{\tau}+i(t)##. Tau equal to 0 seems problematic to me, as the first term blows up.

Oops, I meant ##\tau=\infty##.
 
  • #23
atyy said:
Yes, that makes sense.
Ok good to know!

atyy said:
Oops, I meant ##\tau=\infty##.

Oh :)

atyy said:
I think if m=0 and ##{\tau=\color{red} \infty}##, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.
The divergence to -infinity of the voltage is removed by the boundary conditions on P(x|y,t). The book gives ##P(-\infty |y,t)=0##, if I don't misunderstand it (page 192).
 
  • #24
Worried about my understanding

I'm worried; I seem to misunderstand something. In post #5, when I solved the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## for the case ##F(t)=0## (no white noise), I reached a solution for V(t) that decreases with time.
And as far as I know, for the integrate-and-fire model (and most other models?), the potential should increase with time after it has been reset, until it reaches the threshold voltage and fires again.
So in the model of the book, it seems that the neuron never fires after it has been reset?
 
  • #25
fluidistic said:
I'm worried; I seem to misunderstand something. In post #5, when I solved the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## for the case ##F(t)=0## (no white noise), I reached a solution for V(t) that decreases with time.
And as far as I know, for the integrate-and-fire model (and most other models?), the potential should increase with time after it has been reset, until it reaches the threshold voltage and fires again.
So in the model of the book, it seems that the neuron never fires after it has been reset?

Maybe you should have ##mt## instead of ##m\tau## in the second term?

##V(t)=Ae^{-t/\tau}+mt##
 
  • #27
atyy said:
Maybe you should have ##mt## instead of ##m\tau## in the second term?

##V(t)=Ae^{-t/\tau}+mt##
I've rechecked my math and even tried Wolfram Alpha (http://www.wolframalpha.com/input/?i=dV/dt=-V(t)/a+m), and apparently I made no mistake there.

atyy said:
BTW, the model for ##\tau = \infty## is the Gerstein-Mandelbrot model of 1964.

Their paper is available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1367440/pdf/biophysj00646-0045.pdf .

They discuss the ##m=0## (using ##m## from Goel's notation, not theirs) case on p50, just after their Eq 6.
Thank you very, very much for this reference. This is really helpful.
 
  • #28
fluidistic said:
I've rechecked my math and even tried Wolfram Alpha (http://www.wolframalpha.com/input/?i=dV/dt=-V(t)/a+m), and apparently I made no mistake there.

So I guess when there is no noise, the solutions with finite ##\tau## and ##\tau=\infty## are different.

With finite ##\tau##, ##V=m\tau## at steady state. So with ##m\tau## below threshold, the neuron fires only because the added noise makes the membrane potential cross the threshold randomly.

With ##\tau=\infty##, dV/dt=m, and the solution is V=mt+C, so even if m is small, the potential will eventually reach threshold and the neuron will fire as long as m>0.
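A trivial numerical check of the ##\tau=\infty##, noiseless case (numbers are illustrative, not from the book):

```python
# Noiseless tau = infinity case: dV/dt = m, so V(t) = m*t + V0 and the neuron
# first crosses threshold B at t* = (B - V0)/m.  Illustrative numbers only.
m, V0, B, dt = 0.4, 0.0, 2.0, 1e-4
v, t = V0, 0.0
while v < B:
    v += m * dt
    t += dt
t_star = (B - V0) / m  # analytic crossing time
```

The simulated crossing time matches (B - V0)/m, i.e. the perfect integrator fires periodically even without noise, unlike the leaky case.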
 
  • #29
atyy said:
So I guess when there is no noise, the solutions with finite ##\tau## and ##\tau=\infty## are different.

With finite ##\tau##, ##V=m\tau## at steady state. So with ##m\tau## below threshold, the neuron fires only because the added noise makes the membrane potential cross the threshold randomly.

With ##\tau=\infty##, dV/dt=m, and the solution is V=mt+C, so even if m is small, the potential will eventually reach threshold and the neuron will fire as long as m>0.

I see, thank you.

By the way, I checked that, at least for m=0, eq. 10 does not solve eq. 9 (pages 192-193) as claimed. I'm having a hard time finding the solution to eq. 9 for m=0, and I doubt it is even true for ##m\neq 0##. Also, I think there's a typo in eq. 11: there's a missing factor of ##1/\sqrt t##, if table 3.4 on page 52 is right.
 
  • #30
fluidistic said:
I see, thank you.

By the way, I checked that, at least for m=0, eq. 10 does not solve eq. 9 (pages 192-193) as claimed. I'm having a hard time finding the solution to eq. 9 for m=0, and I doubt it is even true for ##m\neq 0##. Also, I think there's a typo in eq. 11: there's a missing factor of ##1/\sqrt t##, if table 3.4 on page 52 is right.

If you set m=0 in Eq 9, I think you get the one dimensional heat equation
http://mathworld.wolfram.com/HeatConductionEquation.html
 
  • #31
atyy said:
If you set m=0 in Eq 9, I think you get the one dimensional heat equation
http://mathworld.wolfram.com/HeatConductionEquation.html

Yes, you are right. The problem I'm having is with the boundary conditions; I don't really know what they are. The article you gave me (Gerstein-Mandelbrot) seems to deal with the heat equation on page 52, for the case where their c is 0 (Goel's m, if I understood correctly).
However, in Goel the diffusion equation (eq. 9) of page 192 does become the heat equation, and both eq. 6a and 6b (the backward diffusion equation, the Kolmogorov one) also become the heat equation.
But P is a function of x, y and t. The 2 PDEs are then ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}## and ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial y^2}##, subject to the boundary conditions of page 192 (8a, 8b and 8c, though I still doubt that Goel wrote 8b correctly).
I wanted to kind of "cheat": look up Goel's solution for the general case (m not necessarily 0) and set m=0 in it. However, the resulting solution does not satisfy the diffusion equation.
Here is eq. 10 of page 193 with m=0: ##P(x,y,t)=\frac{1}{\sqrt{2\pi}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##, but as I said, I checked whether it solves the heat equation and it does not.
I also tried the solution ##\frac{1}{\sqrt{2\pi t}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##, but it also fails to solve the heat equation. That's why I'm starting to believe that the solution Goel gives for the general case of m doesn't work, since it fails for at least m=0.

In the Gerstein-Mandelbrot article, the solution given is ##I(z_0,\tau)=(4\pi )^{-1/2}z_0 \tau ^{-3/2}\exp \{ -\frac{z_0^2}{4\tau} \}##, where I believe G-M's tau is equivalent to Goel's t and G-M's ##z_0## is equivalent to Goel's x-y, or something like that; I am not really sure (but the threshold potential B must appear somewhere... maybe in ##z_0##?).

P.S.: Also notice that apparently, for the 2 PDEs, the second derivative of P with respect to x is equal to the second derivative of P with respect to y; at least if Goel's solution works (which it doesn't seem to, but maybe the real solution has this property). So one could just add both equations to arrive at a single PDE. Guess what this PDE is? The heat equation in either x or y, that is, ##\frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2} = \frac{\partial P}{\partial t}##. (I have started a thread on this topic at https://www.physicsforums.com/showthread.php?p=4461916#post4461916.)
One more comment: looking at either G-M's solution or Goel's, it doesn't seem like the solution is separable, so separation of variables might not be the way to go. That may be due to the weird boundary conditions; I'm not really sure.
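For what it's worth, the G-M first-passage density can be checked by simulation: integrating ##I(z_0,\tau)## from 0 to T gives the hitting probability ##\mathrm{erfc}(z_0/(2\sqrt{T}))##, which a random walk whose variance grows as 2t (the scaling the density's ##\exp\{-z_0^2/4\tau\}## factor implies, as far as I can tell) should reproduce. The parameter values below are my own illustrative choices.

```python
import math
import random

random.seed(3)

# Integrating I(z0, t) = (4*pi)**-0.5 * z0 * t**-1.5 * exp(-z0**2 / (4*t))
# from 0 to T gives the hitting probability erfc(z0 / (2*sqrt(T))).  A driftless
# walk whose variance grows as 2t should reproduce that probability.
z0, T, dt, n = 1.0, 4.0, 0.005, 2000
hits = 0
for _ in range(n):
    x, t = 0.0, 0.0
    while t < T:
        x += math.sqrt(2 * dt) * random.gauss(0.0, 1.0)  # Var[x(t)] = 2t
        t += dt
        if x >= z0:
            hits += 1
            break
p_sim = hits / n
p_theory = math.erfc(z0 / (2 * math.sqrt(T)))
```

The simulated fraction agrees with the erfc expression up to discretization and sampling error, which supports the ##z_0^2## reading of the exponent.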
 
  • #32
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."
 
  • #33
atyy said:
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."

I see. If I understand correctly, ##z_0## corresponds to B-m, which in my case is B. I don't really see how to obtain P(x,y,t) from there...

EDIT: Never mind! The answer given in Goel works (with the ##1/\sqrt t## typo fixed)! I had to redo the algebra about 3 times, and I've checked with the program Maxima that everything is correct... phew... Hurray.
 
  • #34
Nice! I'll have to try Maxima some time.
 
  • #35
atyy said:
Nice! I'll have to try Maxima some time.

I was misled by Maxima's notation; I might post a screenshot later, if I have the time, to show you. Other than that, it seems pretty nice.

By the way, I've been reading a bit of the book "Spikes, Decisions and Actions" by H.R. Wilson. On page 1 it's written that there are around 10^12 neurons and 10^15 synapses in the human brain. However I thought, and most other sources state, that there are "only" 100 billion neurons in the whole nervous system, i.e. 10^11 neurons. They also say that there are around 10^14 to 10^15 synapses. So who's right?
If I understand well, this means that there are about 1000 to 10000 synapses per neuron on average? So, if I'm still right, more than 1000 to 10000 dendrites per neuron on average?
 
  • #36
Neuron counts are estimates. I usually hear 100 billion. It's probably give or take an order of magnitude anyway...

There can be multiple synapses on a single dendritic process.
 
  • #37
Pythagorean said:
Neuron counts are estimates. I usually hear 100 billion. It's probably give or take an order of magnitude anyway...

There can be multiple synapses on a single dendritic process.

I see, thanks for the information.
 
  • #39
Makes you wonder if the things we consider intelligent belong only to a limited region of the brain.

I wonder how that patient would deal with outrunning tigers, finding water, and hunting mammoths.
 
  • #40
Hello guys,
I'm resuming the "work" I started months ago; my goal is to finish it within the next seven days, more or less.
I am reading about the integrate-and-fire model, and it is not clear to me whether the model in Lapicque's paper (I guess he's the first who "invented" this model?) was a leaky IAF model or simply an IAF model.
Some references seem to claim that the model had a capacitor with a resistor, but that the model was not a leaky one. I do not understand how a model can have a resistor and not be a leaky model.
N.Brunel and M.C.W. van Rossum said:
Lapicque starts his paper by arguing that nerve membranes are nothing but semipermeable, polarizable membranes. Polarizable membranes can in first approximation be modeled as a capacitor with leak. The paper then compares his data to both an RC model of the membrane and a heuristic law of excitability obtained by Weiss (Irnich 2002).
but later in the same paper one reads
Richard Stein introduced in 1965 a leaky integrate-and-fire model with random Poisson excitatory and inhibitory inputs
(Stein 1965).
Another reference claims:
A.N.Burkitt said:
Lapicque (1907) put forward a model of the neuron membrane potential in terms of an electric circuit consisting of a resistor and capacitor in parallel, representing the leakage and capacitance of the membrane.
so it seems to be what we would call a leaky integrate-and-fire model, or am I missing something?

All in all, I do not know how to introduce the integrate-and-fire model in my document. Is it just a parallel RC circuit, or is it what wikipedia claims it to be, i.e.
Wiki the Great said:
One of the earliest models of a neuron was first investigated in 1907 by Louis Lapicque.[1] A neuron is represented in time by ##I(t)=C_m \frac{dV_m (t)}{dt}##
where there is no mention of any resistor.
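For what it's worth, here is a tiny forward-Euler sketch (my own illustration, not Lapicque's original computation) of the difference the resistor makes: with a constant current below rheobase (I < V_th/R), the leaky model C dV/dt = -V/R + I saturates below threshold and never fires, whereas the "perfect" integrator C dV/dt = I always reaches threshold eventually. All parameter values are arbitrary.

```python
C, R = 1.0, 1.0      # membrane capacitance and resistance (arbitrary units)
V_th = 1.0           # firing threshold
I = 0.8              # constant input current, below rheobase V_th / R
dt, T = 1e-3, 20.0   # Euler step and total simulated time

def simulate(leaky):
    """Count threshold crossings with a spike-and-reset rule."""
    V, spikes = 0.0, 0
    for _ in range(round(T / dt)):
        leak = -V / R if leaky else 0.0
        V += dt * (leak + I) / C
        if V >= V_th:
            spikes += 1
            V = 0.0  # reset after the spike
    return spikes

print(simulate(leaky=False))  # perfect integrator: fires repeatedly
print(simulate(leaky=True))   # leaky integrator: never fires here
```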
 
  • #41
I have re-read my work, and in post 5 I think I made a worrying mistake. I claimed that the DE ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## became ##\frac{dV}{dt}=-\frac{V}{\tau} +m## when the white noise is removed, but I think this is false. Since m is the mean of the white noise, if I remove the noise, m should be 0.
So the DE becomes ##\frac{dV}{dt}=-\frac{V}{\tau}## and the solution is simply ##V(t)=Ke^{-t/\tau}##, a potential that simply decays to 0.
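A quick numerical check of that (my own sketch, with arbitrary values for K and tau): integrating ##\frac{dV}{dt}=-\frac{V}{\tau}## by Euler reproduces the exponential decay.

```python
import math

tau, K, dt = 2.0, 5.0, 1e-4  # arbitrary values
V = K
for n in range(int(1.0 / dt)):   # integrate up to t = 1
    V += dt * (-V / tau)
err = abs(V - K * math.exp(-1.0 / tau))
print(err)  # small Euler discretization error
```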
 
  • #42
Hello guys, I have another doubt/question. I would like to know whether, in the integrate-and-fire model, if one has a periodic input function, does one get a periodic somatic potential? It is a doubt because I think that the answer is yes, but I haven't seen any demonstration so far.

To add some grist to the mill, here is the DE: ##a\frac{dV}{dt}=-V+RI(t)##, where we consider I(t) a periodic function. Does this imply that V will be periodic?

I am not posting this in the DE forum because I think that one of you may know the answer, since it may be a very trivial result in neuronal models.

P.S.: a is a constant, V is the somatic potential, R is the membrane's equivalent resistance, and t is time of course.
 
  • #43
Technically, yes, but for application purposes it depends on "a" and R and the frequency of the periodic function I(t). In one regime, the system's intrinsic behavior dominates (a transient to the steady state with little oscillation around it). In the other, the oscillations dominate.

Basically, because the membrane is a capacitor, its response to an oscillating drive depends on how the drive period compares with the membrane time constant: the controlling quantity is w*a (with "a" playing the role of RC, where w is the frequency of your oscillations). For w*a >> 1 the membrane attenuates the oscillations and the intrinsic relaxation dominates; for w*a << 1 the potential follows the input. That "a" basically sets the time scale of the intrinsic behavior (the equation being linear in V).
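A quick numerical illustration of this (a sketch with arbitrary parameter values): integrating a dV/dt = -V + R*I(t) with a sinusoidal drive, the transient dies out on the time scale a, and the potential then repeats with the period of the drive.

```python
import math

a, R, w = 0.5, 1.0, 2.0 * math.pi   # arbitrary parameters
period = 2.0 * math.pi / w
dt = period / 10000                  # the step divides the period exactly

def V_at(t_end):
    """Forward-Euler value of V at time t_end, starting from V(0) = 0."""
    V = 0.0
    for n in range(int(round(t_end / dt))):
        V += dt * (-V + R * math.sin(w * n * dt)) / a
    return V

# Two samples exactly one period apart, long after the transient:
v1 = V_at(20.0 * period)
v2 = V_at(21.0 * period)
print(abs(v1 - v2))  # ~0: the steady-state response is periodic
```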
 
  • #44
Pythagorean said:
Technically, yes, but for application purposes it depends on "a" and R and the frequency of the periodic function I(t). In one regime, the system's intrinsic behavior dominates (a transient to the steady state with little oscillation around it). In the other, the oscillations dominate.

Basically, because the membrane is a capacitor, its response to an oscillating drive depends on how the drive period compares with the membrane time constant: the controlling quantity is w*a (with "a" playing the role of RC, where w is the frequency of your oscillations). For w*a >> 1 the membrane attenuates the oscillations and the intrinsic relaxation dominates; for w*a << 1 the potential follows the input. That "a" basically sets the time scale of the intrinsic behavior (the equation being linear in V).

Thank you, that makes sense.
 
  • #45
I've got other questions.
1) What are the main difference(s) between the Morris-Lecar and FitzHugh-Nagumo models? They both seem to be 2-dimensional simplifications of the 4-dimensional Hodgkin-Huxley model. Do their goals differ, and in what, exactly/more or less?
2) As I understand it, the Gerstein-Mandelbrot model is just a special case of the integrate-and-fire model? It appears when the synaptic input signal is treated as white noise? Does my understanding look correct?

3) One reads in a paper of Burkitt's (A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input) that, if I understand well, for the Gerstein-Mandelbrot model with positive drift (this information is not mentioned, but this is the only case where it makes sense, I believe), the density of the first passage time is f _\theta (t)=\frac{\theta}{\sqrt{2\pi \sigma _W ^2 t^3}} \exp \left [ - \frac{(\theta - \mu _W t)^2}{2\sigma _W ^2 t} \right ]. Let's discard the meaning of all the variables for now. Then the paper reads that the mean of the interspike interval distribution is T_{\text{ISI}}=\theta / \mu _W.
My question is: does this mean that for the Gerstein-Mandelbrot model with positive drift, the mean time between 2 spikes equals \theta / \mu_W?
I guess so, but I would like to be 100% sure. The math is over my head here.
I will give some data: \mu _W is the drift constant. I am puzzled as to what theta is. Apparently it is the difference of potential between the threshold and resting potentials, but then the units of T_{\text{ISI}} would be volts instead of seconds. This does not make sense.
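To convince myself, here is a Monte Carlo sketch (my own, with arbitrary values): simulate the drifted random walk up to the threshold and compare the mean first-passage time with ##\theta / \mu_W##. Note that the units do work out if ##\mu_W## carries units of volts per second, so that ##\theta / \mu_W## is in seconds.

```python
import math, random

random.seed(0)
mu_W, sigma_W, theta = 2.0, 0.5, 1.0   # drift, noise amplitude, threshold (arbitrary)
dt, n_trials = 1e-3, 2000

def first_passage_time():
    """Time for a drifted random walk started at 0 to first reach theta."""
    x, t = 0.0, 0.0
    while x < theta:
        x += mu_W * dt + sigma_W * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return t

mean_T = sum(first_passage_time() for _ in range(n_trials)) / n_trials
print(mean_T)  # close to theta / mu_W = 0.5
```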
 
  • #46
1):

I believe the Morris-Lecar neuron has a larger bifurcation set:

http://www.sciencedirect.com/science/article/pii/S0925231205001049

and is therefore capable of a larger variety of dynamics. The excitable parameter regime of the Morris-Lecar model contains three fixed points: a stable node, a saddle, and a focus. I usually only see FitzHugh-Nagumo with one fixed point.

Finally, the Morris-Lecar neuron is modeled after a real experimental neuron (a barnacle muscle fiber) whereas I think Fitzhugh-Nagumo is meant to be the most mathematically reduced generality of an excitable system (based on Hodgkin-Huxley reductions, I believe).
 
  • #47
Pythagorean said:
1):

I believe the Morris-Lecar neuron has a larger bifurcation set:

http://www.sciencedirect.com/science/article/pii/S0925231205001049

and is therefore capable of a larger variety of dynamics. The excitable parameter regime of the Morris-Lecar model contains three fixed points: a stable node, a saddle, and a focus. I usually only see FitzHugh-Nagumo with one fixed point.

Finally, the Morris-Lecar neuron is modeled after a real experimental neuron (a barnacle muscle fiber) whereas I think Fitzhugh-Nagumo is meant to be the most mathematically reduced generality of an excitable system (based on Hodgkin-Huxley reductions, I believe).

Thank you very much. Extremely helpful information!

Edit: Here I found a 3 fixed point "analysis" on the FitzHugh-Nagumo model: http://icwww.epfl.ch/~gerstner/SPNM/node22.html.
 
  • #48
Yes, because of the cubic nature of the differential equation describing V, you can always intersect the cubic three times with a straight line. But Fitzhugh-Nagumo displays excitability without the three intersection points.

In the physiological parameter regime of the Morris-Lecar model, the three fixed points have some kind of physiological meaning and the system only becomes an oscillator when two of the fixed points collide and annihilate, leaving only the unstable focus behind:

[Image 978-0-387-87708-2_3_Fig5: Morris-Lecar phase planes, excitable regime (left) and oscillatory regime (right)]


So on the left you see the excitable regime, and on the right the oscillatory regime. This correlates well with the effect of persistent currents and constant stimuli in neural systems, and it's the kind of intrinsic neuron dynamics I've become familiar with in my experience with the Morris-Lecar model.

Maybe the FitzHugh-Nagumo model has meaningful physiological correlates, too. I don't know; I'm not that experienced with the model and always had in mind that it was kind of a toy model.
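A minimal sketch of that one-fixed-point excitability (my own illustration, using the standard textbook parameters a = 0.7, b = 0.8, eps = 0.08 and I = 0): a small kick to v decays back to rest, while a large kick produces one full excursion before returning.

```python
# FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I, dw/dt = eps*(v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.0
v_rest, w_rest = -1.1997, -0.6246   # the single (stable) fixed point

def max_v_after_kick(delta, dt=1e-3, T=50.0):
    """Largest v reached after perturbing the resting state by delta."""
    v, w = v_rest + delta, w_rest
    v_max = v
    for _ in range(int(T / dt)):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        v_max = max(v_max, v)
    return v_max

print(max_v_after_kick(0.2))  # subthreshold kick: v stays near rest
print(max_v_after_kick(0.8))  # suprathreshold kick: large excursion, v > 1
```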
 
  • #49
I see Pythagorean.
I would like to know your opinion on Izhikevich's model. From what I've read, it's "very" simple mathematically (a system of 2 DEs, one of which is nonlinear but only quadratic). It has only 4 parameters (apart from the input current) and can describe a plethora of experimentally observed phenomena.
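For reference, the model as published in Izhikevich (2003) is ##\frac{dv}{dt}=0.04v^2+5v+140-u+I##, ##\frac{du}{dt}=a(bv-u)##, with the reset rule: if v ≥ 30 mV, then v ← c and u ← u + d. A minimal Euler sketch (the regular-spiking parameters a=0.02, b=0.2, c=-65, d=8 are from the paper; the input amplitude, step and duration are my own arbitrary choices):

```python
# Izhikevich (2003) simple model with regular-spiking parameters.
a_p, b_p, c_p, d_p = 0.02, 0.2, -65.0, 8.0
I, dt, T = 10.0, 0.1, 1000.0   # constant input; time in ms

v, u, spikes = c_p, b_p * c_p, 0
for _ in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a_p * (b_p * v - u)
    if v >= 30.0:               # spike detected: apply the reset rule
        v, u, spikes = c_p, u + d_p, spikes + 1

print(spikes)  # tonic firing under constant input
```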
 