Neuroscience: Neural impulse, references and ideas

Summary
The discussion centers on seeking updated references regarding neural impulses at the single neuron level, with a focus on mathematical models. The original poster mentions a 1974 book and expresses interest in the firing rate of neurons and models like Morris-Lecar. Participants suggest various resources, including free online materials and notable textbooks, emphasizing the significance of the Hodgkin-Huxley equations in modeling. The conversation also delves into the mathematical representation of neuronal potential, discussing the implications of parameters like tau and the role of noise in firing dynamics. Overall, the exchange highlights the complexity of neuronal modeling and the importance of current literature in understanding these processes.
  • #31
atyy said:
If you set m=0 in Eq 9, I think you get the one dimensional heat equation
http://mathworld.wolfram.com/HeatConductionEquation.html

Yes, you are right. The problem I'm having is with the boundary conditions; I don't really know what they are. The article you gave me (Gerstein-Mandelbrot) seems to deal with the heat equation on page 52, for the case where their c equals 0 (Goel's m, if I understand correctly).
However, in Goel the diffusion equation (eq. 9 on page 192) does indeed become the heat equation, and both eq. 6a and 6b (the backward diffusion equation, i.e. the Kolmogorov equation) also become the heat equation.
But P is a function of x, y and t. The 2 PDEs are then ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}## and ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial y^2}##, subject to the boundary conditions on page 192 (8a, 8b and 8c, though I still have doubts that Goel wrote 8b correctly).
I wanted to "cheat" a bit: look up Goel's solution for the general case (m not necessarily 0) and set m=0 in it. However, the resulting expression does not satisfy the diffusion equation.
Here is eq. 10 of page 193 with m=0: ##P(x,y,t)=\frac{1}{\sqrt{2\pi}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##, but as I said, I checked whether it solves the heat equation and it does not.
I also tried the candidate ##\frac{1}{\sqrt{2\pi t}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##, but it also failed to solve the heat equation. That's why I'm starting to believe that the solution Goel gives for the general case doesn't work at all, since it fails for at least m=0.

In the Gerstein-Mandelbrot article, a solution ##I(z_0,\tau)=(4\pi )^{-1/2}z_0 \tau ^{-3/2}\exp \{ -\frac{z_0^2}{4\tau} \}## is given, where I believe G-M's ##\tau## corresponds to Goel's t and G-M's ##z_0## to Goel's x-y, or something like that; I am not really sure (but the threshold potential B must appear somewhere... maybe in ##z_0##?).

P.S.: Also notice that, apparently, for the 2 PDEs the second derivative of P with respect to x equals the second derivative of P with respect to y, at least if Goel's solution works (which it does not seem to, but maybe the real solution has this property). So one could just add both equations to obtain a single PDE. Guess what this PDE is? The heat equation, in either x or y: ##\frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2} = \frac{\partial P}{\partial t}##. (I have started a thread on this topic at https://www.physicsforums.com/showthread.php?p=4461916#post4461916.)
One more comment: looking at either G-M's solution or Goel's, the solution does not appear to be separable, so separation of variables might not be the way to go. That may be due to the unusual boundary conditions; I'm not really sure.
 
  • #32
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."
 
  • #33
atyy said:
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."

I see. If I understand correctly, ##z_0## corresponds to B-m, which in my case (m=0) equals B. I don't really see how to obtain P(x,y,t) from there, though...

EDIT: Never mind! The solution given in Goel works (with the ##1/\sqrt{t}## typo fixed)! I had to redo the algebra about three times, and I've checked with the program Maxima that everything is correct... phew... Hurray.
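For anyone who wants to reproduce the check without Maxima, here is a minimal SymPy sketch (variable names mirror the notation above) verifying that the corrected expression satisfies the heat equation and vanishes at the threshold x = B:

```python
# SymPy check that Goel's eq. 10 with m = 0 and the corrected 1/sqrt(t) prefactor
# satisfies dP/dt = (sigma^2/2) d^2P/dx^2 and vanishes at the threshold x = B.
import sympy as sp

x, y, B = sp.symbols('x y B', real=True)
t, sigma = sp.symbols('t sigma', positive=True)

P = (1 / (sigma * sp.sqrt(2 * sp.pi * t))) * (
        sp.exp(-(x - y)**2 / (2 * sigma**2 * t))
        - sp.exp(-(x + y - 2 * B)**2 / (2 * sigma**2 * t)))

residual = sp.diff(P, t) - sp.Rational(1, 2) * sigma**2 * sp.diff(P, x, 2)
print(sp.simplify(residual))       # expected: 0  (P solves the heat equation)
print(sp.simplify(P.subs(x, B)))   # expected: 0  (absorbing boundary at x = B)
```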
 
  • #34
Nice! I'll have to try Maxima some time.
 
  • #35
atyy said:
Nice! I'll have to try Maxima some time.

I was initially led astray by Maxima's notation; I might post a screenshot later to show you, if I have the time. Other than that, it seems pretty nice.

By the way, I've been reading a bit of the book "Spikes, Decisions, and Actions" by H.R. Wilson. On page 1 it is written that there are around ##10^{12}## neurons and ##10^{15}## synapses in the human brain. However, I thought, and most other sources state, that there are "only" 100 billion neurons in the whole nervous system, i.e. ##10^{11}## neurons, and around ##10^{14}## to ##10^{15}## synapses. So who's right?
If I understand correctly, this means there are about 1,000 to 10,000 synapses per neuron on average? So, if I'm still right, more than 1,000 to 10,000 dendrites per neuron on average?
 
  • #36
Neuron counts are estimates. I usually hear 100 billion. It's probably give or take an order of magnitude anyway...

There can be multiple synapses on a single dendritic process.
 
  • #37
Pythagorean said:
Neuron counts are estimates. I usually hear 100 billion. It's probably give or take an order of magnitude anyway...

There can be multiple synapses on a single dendritic process.

I see, thanks for the information.
 
  • #39
Makes you wonder if the things we consider intelligent belong only to a limited region of the brain.

I wonder how that patient would deal with outrunning tigers, finding water, and hunting mammoths.
 
  • #40
Hello guys,
I'm picking up again the "work" I started months ago; my goal is to finish it within the next seven days, more or less.
I am reading about the integrate-and-fire model, and it is not clear to me whether the model in Lapicque's paper (I guess he is the first who "invented" this model?) was a leaky IAF model or simply an IAF model.
Some references seem to claim that the model had a capacitor with a resistor but that it was not a leaky one. I do not understand how a model can have a resistor and not be a leaky model.
N.Brunel and M.C.W. van Rossum said:
Lapicque starts his paper by arguing that nerve membranes are nothing but semipermeable, polarizable membranes. Polarizable membranes can in first approximation be modeled as a capacitor with leak. The paper then compares his data to both an RC model of the membrane and a heuristic law of excitability obtained by Weiss (Irnich 2002).
but later in the same paper one reads
Richard Stein introduced in 1965 a leaky integrate-and-fire model with random Poisson excitatory and inhibitory inputs (Stein 1965).
Another reference claims:
A.N.Burkitt said:
Lapicque (1907) put forward a model of the neuron membrane potential in terms of an electric circuit consisting of a resistor and capacitor in parallel, representing the leakage and capacitance of the membrane.
so it seems to be what we would call a leaky integrate-and-fire model, or am I missing something?

All in all, I do not know how to introduce the integrate-and-fire model in my document. Is it just a parallel RC circuit, or is it what Wikipedia claims it to be, i.e.
Wiki the Great said:
One of the earliest models of a neuron was first investigated in 1907 by Louis Lapicque.[1] A neuron is represented in time by ##I(t)=C_m \frac{dV_m (t)}{dt}##
where there is no mention of any resistor.
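For concreteness, here is a minimal Python sketch (forward-Euler steps, purely illustrative parameter values, not taken from Lapicque) contrasting the two readings: a "perfect" integrator ##C\frac{dV}{dt}=I(t)## and a leaky RC integrator ##C\frac{dV}{dt}=-V/R+I(t)##, both with the usual threshold-and-reset rule:

```python
# Hedged sketch: "perfect" vs leaky integrate-and-fire under a constant input.
# Parameter values are purely illustrative, not taken from Lapicque's paper.
C, R = 1.0, 10.0             # capacitance and leak resistance (arbitrary units)
V_th, V_reset = 1.0, 0.0     # firing threshold and reset value
I, dt, T = 0.08, 1e-3, 30.0  # constant input, time step, total time

def simulate(leaky):
    V, spikes = 0.0, []
    for step in range(int(T / dt)):
        leak = -V / R if leaky else 0.0   # the resistor term is the only difference
        V += dt * (leak + I) / C          # forward-Euler step of C dV/dt = leak + I
        if V >= V_th:                     # threshold-and-reset rule
            spikes.append(step * dt)
            V = V_reset
    return spikes

# With I*R below threshold, the leaky model settles at V = I*R = 0.8 and never fires,
# while the perfect integrator accumulates charge indefinitely and eventually fires.
print("perfect IF spike count:", len(simulate(leaky=False)))
print("leaky   IF spike count:", len(simulate(leaky=True)))
```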
 
  • #41
I have re-read my work, and in post 5 I think I made a worrying mistake. I claimed that the DE ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## became ##\frac{dV}{dt}=-\frac{V}{\tau} +m## when the white noise is removed, but I think this is false. Since m is the mean of the white noise, if I remove the noise, m should equal 0.
So the DE becomes ##\frac{dV}{dt}=-\frac{V}{\tau}## and the solution is simply ##V(t)=Ke^{-t/\tau}##, a potential that decays to 0.
 
  • #42
Hello guys, I have another doubt/question. I would like to know whether, in the integrate-and-fire model, a periodic input function yields a periodic somatic potential. I suspect the answer is yes, but I haven't seen a demonstration so far.

To make things concrete, here is the DE: ##a\frac{dV}{dt}=-V+RI(t)##, where we take I(t) to be a periodic function. Does this imply that V will be periodic?

I am not posting this in the DE forum because I think one of you may know the answer, since it may be a very trivial result in neuronal models.

P.S.: a is a constant, V is the somatic potential, R is the neuron's equivalent resistance, and t is time, of course.
 
  • #43
Technically, yes, but for practical purposes it depends on "a", R, and the frequency of the periodic function I(t). In one regime the system's intrinsic behavior dominates (a transient to the steady state with small oscillations around it); in the other, the oscillations dominate.

Basically, because the membrane is a capacitor, its behavior in an oscillating circuit depends on the capacitive reactance ##1/(\omega C)## relative to the resistance, i.e. on the ratio ##1/(\omega R C)##, where R is the resistance, ##\omega## is the frequency of your oscillations and C is the capacitance. In your system that ratio is ##1/(\omega a)##. That "a" basically sets the time scale of the intrinsic behavior (which is just a linear relaxation of V).
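To make the two regimes concrete, here is a rough numerical sketch (illustrative values of a, R and the drive amplitude; the comparison curve is the standard first-order low-pass amplitude ##RI_0/\sqrt{1+(\omega a)^2}##, not something specific to this thread):

```python
# Rough sketch of the two regimes for  a*dV/dt = -V + R*I0*sin(w*t).
# All numbers are illustrative; the comparison formula is the standard
# first-order low-pass amplitude R*I0/sqrt(1 + (a*w)^2).
import numpy as np

a, R, I0 = 1.0, 2.0, 1.0   # time constant, resistance, drive amplitude

def steady_state_amplitude(w, steps_per_period=2000, n_periods=20):
    dt = (2 * np.pi / w) / steps_per_period
    n = steps_per_period * n_periods
    V = np.zeros(n)
    for k in range(1, n):
        t = (k - 1) * dt
        V[k] = V[k - 1] + dt * (-V[k - 1] + R * I0 * np.sin(w * t)) / a
    tail = V[n // 2:]                      # discard the initial transient
    return 0.5 * (tail.max() - tail.min())

for w in (0.1, 1.0, 10.0):
    predicted = R * I0 / np.sqrt(1 + (a * w) ** 2)
    print(f"w = {w:5.1f}: simulated amplitude {steady_state_amplitude(w):.3f}, "
          f"low-pass formula {predicted:.3f}")
```

For slow drive (##\omega a \ll 1##) the potential follows the input with nearly the full amplitude ##RI_0##; for fast drive (##\omega a \gg 1##) the periodic part is strongly attenuated and the transient to the steady state dominates what you see.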
 
  • #44
Pythagorean said:
Technically, yes, but for practical purposes it depends on "a", R, and the frequency of the periodic function I(t). In one regime the system's intrinsic behavior dominates (a transient to the steady state with small oscillations around it); in the other, the oscillations dominate.

Basically, because the membrane is a capacitor, its behavior in an oscillating circuit depends on the capacitive reactance ##1/(\omega C)## relative to the resistance, i.e. on the ratio ##1/(\omega R C)##, where R is the resistance, ##\omega## is the frequency of your oscillations and C is the capacitance. In your system that ratio is ##1/(\omega a)##. That "a" basically sets the time scale of the intrinsic behavior (which is just a linear relaxation of V).

Thank you, that makes sense.
 
  • #45
I've got other questions.
1) What are the main difference(s) between the Morris-Lecar and FitzHugh-Nagumo models? They both seem to be 2-dimensional simplifications of the 4-dimensional Hodgkin-Huxley model. Do their goals differ, and if so, how, more or less?
2) As I understand it, the Gerstein-Mandelbrot model is just a special case of the integrate-and-fire model, which appears when the synaptic input signal is treated as white noise? Does my understanding look correct?

3) One reads in a Burkitt paper (A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input) that, if I understand correctly, for the Gerstein-Mandelbrot model with positive drift (this is not stated explicitly, but it is the only case where it makes sense, I believe), the density of the first passage time is ##f _\theta (t)=\frac{\theta}{\sqrt{2\pi \sigma _W ^2 t^3}} \exp \left [ - \frac{(\theta - \mu _W t)^2}{2\sigma _W ^2 t} \right ]##. Let's set aside the meaning of all the variables for now. The paper then states that the mean of the interspike-interval distribution is ##T_{\text{ISI}}=\theta / \mu _W##.
My question is: does this mean that, for the Gerstein-Mandelbrot model with positive drift, the mean time between 2 spikes equals ##\theta / \mu_W##?
I guess so, but I would like to be 100% sure. The math is over my head here.
Some data: ##\mu _W## is the drift constant. I am puzzled as to what ##\theta## is. Apparently it is the potential difference between the threshold and resting potentials, but then the units of ##T_{\text{ISI}}## would be volts instead of seconds, which does not make sense.
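For point 3, here is a small numerical check (with made-up values of ##\theta##, ##\mu_W## and ##\sigma_W##) that the quoted first-passage density integrates to 1 for positive drift and has mean ##\theta/\mu_W##. On the units: if ##\theta## is in volts and ##\mu_W## is a drift in volts per second, then ##\theta/\mu_W## does come out in seconds.

```python
# Sketch: numerical check that the first-passage-time density quoted from Burkitt
# integrates to 1 and has mean theta/mu_W for positive drift (illustrative values).
import numpy as np
from scipy.integrate import quad

theta, mu, sigma = 1.0, 0.5, 0.3   # threshold distance, drift, noise amplitude (made up)

def f(t):
    if t <= 0.0:
        return 0.0
    return theta / np.sqrt(2 * np.pi * sigma**2 * t**3) \
        * np.exp(-(theta - mu * t)**2 / (2 * sigma**2 * t))

mass, _ = quad(f, 0.0, np.inf)
mean, _ = quad(lambda t: t * f(t), 0.0, np.inf)
print(f"total probability ~ {mass:.4f}")                       # expected ~ 1
print(f"mean ISI ~ {mean:.4f}, theta/mu_W = {theta / mu:.4f}")  # expected to agree
```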
 
  • #46
1):

I believe The Morris-Lecar neuron has a larger bifurcation set:

http://www.sciencedirect.com/science/article/pii/S0925231205001049

and is therefore capable of a large variety of dynamics. The excitable parameter regime of the Morris-Lecar model has three fixed points: a stable point, a saddle, and a focus. I usually only see FitzHugh-Nagumo with one fixed point.

Finally, the Morris-Lecar neuron is modeled after a real experimental preparation (a barnacle muscle fiber), whereas I think FitzHugh-Nagumo is meant to be the most mathematically reduced, generic form of an excitable system (based on Hodgkin-Huxley reductions, I believe).
 
  • #47
Pythagorean said:
1):

I believe The Morris-Lecar neuron has a larger bifurcation set:

http://www.sciencedirect.com/science/article/pii/S0925231205001049

and is therefore capable of a large variety of dynamics. The excitable parameter regime of the Morris-Lecar model has three fixed points: a stable point, a saddle, and a focus. I usually only see FitzHugh-Nagumo with one fixed point.

Finally, the Morris-Lecar neuron is modeled after a real experimental preparation (a barnacle muscle fiber), whereas I think FitzHugh-Nagumo is meant to be the most mathematically reduced, generic form of an excitable system (based on Hodgkin-Huxley reductions, I believe).

Thank you very much. Extremely helpful information!

Edit: Here I found a three-fixed-point "analysis" of the FitzHugh-Nagumo model: http://icwww.epfl.ch/~gerstner/SPNM/node22.html.
 
  • #48
Yes: because of the cubic nature of the differential equation describing V, a straight line can always be chosen to intersect the cubic nullcline three times. But FitzHugh-Nagumo displays excitability without the three intersection points.

In the physiological parameter regime of the Morris-Lecar model, the three fixed points have some kind of physiological meaning and the system only becomes an oscillator when two of the fixed points collide and annihilate, leaving only the unstable focus behind:

[Figure: Morris-Lecar phase portraits. Left: the excitable regime with three fixed points. Right: the oscillatory regime after two of them collide and annihilate.]


So on the left you see the excitable regime, and on the right the oscillatory regime. This correlates well with the effect of persistent currents and constant stimuli in neural systems, and it is the kind of intrinsic neuronal dynamics I've become familiar with in my experience with the Morris-Lecar model.

Maybe FitzHugh-Nagumo has meaningful physiological correlates too; I don't know. I'm not that experienced with the model and have always thought of it as something of a toy model.
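As a tiny numerical illustration of the "cubic intersected by a line" point, take the common FitzHugh-Nagumo form ##\dot v = v - v^3/3 - w + I##, ##\dot w = \epsilon(v + a - bw)## (parameter values below are arbitrary illustrations): the fixed points are the intersections of the cubic v-nullcline with the linear w-nullcline, and depending on the line's slope one finds one or three of them.

```python
# Sketch: count fixed points of a FitzHugh-Nagumo-type system by intersecting the
# cubic v-nullcline  w = v - v^3/3 + I  with the linear w-nullcline  w = (v + a)/b.
# Parameter values are arbitrary illustrations, not tied to any particular paper.
import numpy as np

def fixed_point_voltages(a, b, I):
    # v - v^3/3 + I = (v + a)/b   =>   -v^3/3 + (1 - 1/b) v + (I - a/b) = 0
    coeffs = [-1.0 / 3.0, 0.0, 1.0 - 1.0 / b, I - a / b]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

for a, b in ((0.7, 0.8), (0.7, 3.0)):
    v_star = fixed_point_voltages(a, b, I=0.0)
    print(f"a={a}, b={b}: {len(v_star)} fixed point(s) at v = {np.round(v_star, 3)}")
```

With the steeper w-nullcline (b = 0.8) there is a single intersection, while the shallower one (b = 3.0) gives three, which is the geometric picture behind the one-versus-three fixed-point distinction above.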
 
  • #49
I see, Pythagorean.
I would like to know your opinion on Izhikevich's model. From what I've read, it's "very" simple mathematically (a system of 2 DEs, one of which is nonlinear but only quadratic). It has only 4 parameters (apart from the input current) and can reproduce a plethora of experimentally observed phenomena.
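For reference, here is a minimal sketch of the usual 2003 formulation of that model, ##\dot v = 0.04v^2+5v+140-u+I##, ##\dot u = a(bv-u)##, with the reset ##v\to c,\ u\to u+d## when ##v\ge 30## mV; the "regular spiking" values of a, b, c, d below are the commonly quoted ones, but treat them and the input current as illustrative.

```python
# Minimal forward-Euler sketch of the Izhikevich (2003) model with commonly quoted
# "regular spiking" parameters; the input current is an arbitrary illustration.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # the four model parameters
I, dt, T = 10.0, 0.1, 1000.0         # input (arbitrary units), step and duration in ms

v = -65.0        # membrane potential, start at rest
u = b * v        # recovery variable
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)   # quadratic membrane equation
    u += dt * a * (b * v - u)                       # linear recovery variable
    if v >= 30.0:                                   # spike: reset v, bump u
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in {T:.0f} ms")
```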
 
