Neural impulse: references and ideas

  • #1
fluidistic
Hello people,
I would like some references, be they books or papers, about the process of neural impulses at the single-neuron level.
My only reference so far is a book from 1974 (Stochastic Models in Biology), which is probably outdated despite having around 30 pages on the subject.
The more mathematics there is, the better for me.
Thank you.
 
  • #2
Have you looked through some current neuroscience textbooks? There are some cheap used ones on Amazon.
 
  • #3
Greg Bernhardt said:
Have you looked through some current neuroscience textbooks? There are some cheap used ones on Amazon.

No I haven't, yet. Thanks for the suggestion.
I believe I'm interested in the firing rate of a single neuron for different models (I guess they all include white noise?).
I've read a little about the Morris-Lecar model, which seems much more complicated than the model in the book I am using (which is written by Goel, by the way). I don't know the name of the model I'm dealing with. It assumes that the voltage of the soma satisfies ##\frac{dV}{dt}=-\frac{V}{\tau}+i(t)##, where i(t) is a function due to the effect of other neurons on the soma of the particular neuron.
I'm basically at a loss.
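For concreteness, here is a minimal numerical sketch of that equation; the pulse-train input standing in for i(t) and all parameter values are illustrative assumptions, not from the book:

```python
# Minimal sketch (illustrative values, not from the book): forward-Euler
# integration of dV/dt = -V/tau + i(t), with i(t) modeled as occasional
# input pulses arriving from other neurons.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 0.1, 100.0, 10.0   # time step, duration, decay constant (ms)
steps = int(T / dt)
V = np.zeros(steps)
pulse_size = 0.5                # assumed voltage jump per incoming pulse
p_pulse = 0.05                  # assumed pulse probability per step
for k in range(1, steps):
    i_t = (pulse_size / dt) if rng.random() < p_pulse else 0.0
    V[k] = V[k - 1] + dt * (-V[k - 1] / tau + i_t)
print(V[-5:])  # voltage jumps on input and decays toward 0 in between
```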
 
  • #4
1974 is probably fine; they understood it well by then.

http://www.ncbi.nlm.nih.gov/books/NBK10799/ is free and has the basic ideas, but I think not mathematically.

http://icwww.epfl.ch/~gerstner/SPNM/node12.html is free and mathematical. The key equations are the Hodgkin-Huxley equations.

Thomas Weiss's "Cellular Biophysics", Christof Koch's "Biophysics of Computation" or Johnston and Wu's "Foundations of Cellular Neurophysiology" are very good, but not free.

The advances are either in simplified models or in more detail about different channels. However, the Hodgkin-Huxley equations are still the basis for most modelling (unless you go to the single-channel level, in which case the phenomenological variables in the Hodgkin-Huxley equations are not easily related to quantities you can measure).

Try http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3375691/ or http://www.jneurosci.org/content/31/24/8936.long to see current usage.
 
  • #5
Ok, thank you very much atyy; I'm going to have a close look at all these resources as soon as I can. It's very nice to know that my book is not that outdated on the subject.
Meanwhile, I have some questions and doubts. My book basically states that the potential of the soma has the form ##\frac{dV}{dt}=h(V)+e(V)i(t)##, where i(t) is an input signal, e(V) describes the effect of the input signal (from what I understood on Wikipedia, this function would be the synaptic weight?) and h(V) describes the decay of the potential when there's no input signal.
Does this method have a name?

Then the book makes some simplifications, such as: the mean value of the function i(t) is m, and h(V) takes the form ##-V/\tau##, where tau is the decay time constant. It also assumes that the change in the potential due to an arriving signal is independent of the current value of the potential and is proportional to the input.
The potential then takes the form ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, where F(t) is a white noise function (equal to ##\frac{i(t)-m}{\sigma}##) with mean 0, and both m and sigma are positive constants. I've solved the equation when the noise is 0, and the solution is a decreasing exponential (##V(t)=Ae^{-t/\tau}+m\tau##). So if I understand well, this V(t) describes the potential of the soma right after it has fired? It has a high initial value and then decreases exponentially toward ##m\tau##, the mean input multiplied by tau.
Later the book states that m>0 is more realistic than m<0, which I can understand. But it also states that the limit where tau tends to infinity (not realistic) is equivalent to the case where the time taken for the potential to reach its resting value (m times tau, I guess) is much longer than the time between 2 firings.
So if I understand well, a huge value for tau would mean that the neuron fires extremely fast?
How unrealistic is this? Taking tau tending to infinity would make the math slightly simpler (though still drastically complicated) for a stochastic analysis.
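As a quick sanity check on the noiseless solution, sympy reproduces it directly (this verifies only the ODE quoted above, nothing else):

```python
# Check the noiseless solution of dV/dt = -V/tau + m symbolically.
import sympy as sp

t = sp.symbols('t', positive=True)
tau, m = sp.symbols('tau m', positive=True)
V = sp.Function('V')
sol = sp.dsolve(sp.Eq(V(t).diff(t), -V(t)/tau + m), V(t))
print(sol)  # V(t) = C1*exp(-t/tau) + m*tau, matching A*exp(-t/tau) + m*tau
```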
 
  • #6
fluidistic said:
Ok, thank you very much atyy; I'm going to have a close look at all these resources as soon as I can. It's very nice to know that my book is not that outdated on the subject.
Meanwhile, I have some questions and doubts. My book basically states that the potential of the soma has the form ##\frac{dV}{dt}=h(V)+e(V)i(t)##, where i(t) is an input signal, e(V) describes the effect of the input signal (from what I understood on Wikipedia, this function would be the synaptic weight?) and h(V) describes the decay of the potential when there's no input signal.
Does this method have a name?

The notation is a bit different from what I'm used to, so here's my guess.

For a simple model I usually write ##C\frac{dV}{dt} = G_R(E_R-V) + G_S(t)(E_S-V)##. This is a model with no voltage-dependent conductances, so no spikes, just a passive membrane receiving synaptic input.

V = membrane potential
t = time
C = membrane capacitance
##G_R## = resting membrane conductance
##E_R## = resting membrane potential
##G_S## = synaptic conductance
##E_S## = synaptic reversal potential
##E_S-V## is often called the "synaptic driving force"

If I rearrange, I get ##\frac{dV}{dt} = \frac{1}{C}\left[-G_R V + G_R E_R + G_S(t)(E_S-V)\right]##.

If I compare with the equation in your book, I get

h(V) = ##-G_R V/C##
m = ##G_R E_R/C##
i(t) = ##G_S(t)##
e(V) = ##(E_S-V)/C##

So i(t) would be the synaptic conductance, and e(V) would be the synaptic driving force divided by the membrane capacitance.
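For concreteness, a forward-Euler sketch of this passive membrane receiving a brief synaptic conductance pulse; all units and parameter values are illustrative assumptions:

```python
# Forward-Euler sketch of C dV/dt = GR*(ER - V) + GS(t)*(ES - V),
# a passive membrane receiving a brief synaptic conductance pulse.
# Units and values are illustrative assumptions.
import numpy as np

dt, T = 0.01, 50.0                    # ms
C, GR, ER = 1.0, 0.1, -70.0           # nF, uS, mV
ES = 0.0                              # excitatory reversal potential, mV
t = np.arange(0, T, dt)
GS = np.where((t > 10) & (t < 12), 0.05, 0.0)  # square conductance pulse, uS

V = np.full_like(t, ER)
for k in range(1, t.size):
    dV = (GR * (ER - V[k-1]) + GS[k-1] * (ES - V[k-1])) / C
    V[k] = V[k-1] + dt * dV
print(V.max())  # membrane depolarizes toward ES during the pulse, then decays
```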

fluidistic said:
Then the book makes some simplifications, such as: the mean value of the function i(t) is m, and h(V) takes the form ##-V/\tau##, where tau is the decay time constant. It also assumes that the change in the potential due to an arriving signal is independent of the current value of the potential and is proportional to the input.
The potential then takes the form ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, where F(t) is a white noise function (equal to ##\frac{i(t)-m}{\sigma}##) with mean 0, and both m and sigma are positive constants. I've solved the equation when the noise is 0, and the solution is a decreasing exponential (##V(t)=Ae^{-t/\tau}+m\tau##). So if I understand well, this V(t) describes the potential of the soma right after it has fired? It has a high initial value and then decreases exponentially toward ##m\tau##, the mean input multiplied by tau.
Later the book states that m>0 is more realistic than m<0, which I can understand. But it also states that the limit where tau tends to infinity (not realistic) is equivalent to the case where the time taken for the potential to reach its resting value (m times tau, I guess) is much longer than the time between 2 firings.
So if I understand well, a huge value for tau would mean that the neuron fires extremely fast?
How unrealistic is this? Taking tau tending to infinity would make the math slightly simpler (though still drastically complicated) for a stochastic analysis.

With these simplifications, the model seems to be just the same as the simple model I wrote above, so it would have no action potentials and would be just a passive membrane. The solution you wrote is just passive decay to the resting membrane potential from an initial condition in which the membrane had been perturbed from rest.
 
  • #7
Ok I see, thank you. I'm starting to understand the book a bit better. This potential is the one right after a neuron has fired, and it is valid only between 2 firings, if I understand well.
What is not clear to me is that right after the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, the book says that the probability P that the somatic potential has the value x at time t, given that it had the value y at time t=0, satisfies the Fokker-Planck equation ##\frac{\partial P}{\partial t} = - \frac{\partial }{\partial x} \left [ \left ( m- \frac{x}{\tau}\right ) P \right ] + \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}##.
I could be mathematically convinced of that, but not by looking at the potential function ##V(t)=Ae^{-t/\tau}+m\tau##, which satisfies the noiseless equation. By looking at that function, there's no way there could be another firing, since V(t) decreases toward its resting value as t tends to infinity.
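One way to get intuition for what the Fokker-Planck equation says is to simulate the Langevin equation many times and compare the ensemble moments with the Gaussian solution; the mean and variance formulas below are the standard Ornstein-Uhlenbeck results, stated as assumptions to check, and the parameters are illustrative:

```python
# Euler-Maruyama ensemble for dV = (m - V/tau) dt + sigma dW, compared
# with the Gaussian moments that solve the Fokker-Planck equation
# (standard Ornstein-Uhlenbeck results; illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
m, tau, sigma, y = 1.0, 5.0, 0.5, 0.0
dt, T, n = 0.01, 2.0, 100_000
V = np.full(n, y)
for _ in range(int(T / dt)):
    V += (m - V / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)

mean_th = y * np.exp(-T / tau) + m * tau * (1 - np.exp(-T / tau))
var_th = sigma**2 * tau / 2 * (1 - np.exp(-2 * T / tau))
print(V.mean(), mean_th)   # should agree closely
print(V.var(), var_th)
```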
 
  • #8
fluidistic said:
Ok I see, thank you. I'm starting to understand the book a bit better. This potential is the one right after a neuron has fired, and it is valid only between 2 firings, if I understand well.

From what you're telling me, I think this model is called the integrate-and-fire neuron http://lcn.epfl.ch/~gerstner/SPNM/node26.html.

fluidistic said:
What is not clear to me is that right after the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)##, the book says that the probability P that the somatic potential has the value x at time t, given that it had the value y at time t=0, satisfies the Fokker-Planck equation ##\frac{\partial P}{\partial t} = - \frac{\partial }{\partial x} \left [ \left ( m- \frac{x}{\tau}\right ) P \right ] + \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}##.
I could be mathematically convinced of that, but not by looking at the potential function ##V(t)=Ae^{-t/\tau}+m\tau##, which satisfies the noiseless equation. By looking at that function, there's no way there could be another firing, since V(t) decreases toward its resting value as t tends to infinity.

No it is not obvious, as it depends on the noise. I don't remember exactly what noise gives that Fokker-Planck equation, but I think what your book has should be similar to http://lcn.epfl.ch/~gerstner/SPNM/node37.html (Eq 5.73 and 5.89)

You can also Google "Langevin equation" and "Diffusion Equation", which I think are mathematically related to your equations, eg. http://dasher.wustl.edu/bio5476/reading/stochastic.pdf (Try the "Smoluchowski Diffusion Equation" on p2).

This is probably the closest: http://alice.nc.huji.ac.il/~netazach/action%20potential/burkitt%202006.pdf (Eq 15, 25, 26)
 
  • #9
It looks like an Ornstein-Uhlenbeck process:
http://en.wikipedia.org/wiki/Ornstein–Uhlenbeck_process

in which the noise is a Wiener process. You can model it numerically. Lemons has the most straightforward treatment (maybe you'll get lucky with the correct page on Google Books :)

http://books.google.ca/books/about/...ic_Processes.html?id=Uw6YDkd_CXcC&redir_esc=y

fluidistic said:
I've read a little about the Morris-Lecar model, which seems much more complicated than the model in the book I am using

To me, the Morris-Lecar model is at least conceptually easier to understand. I like the description of the neuron's channel populations. Plus, you really need two dimensions to have oscillatory behavior in a deterministic (thus mechanistic) system, which leads to confusion: as you noted, it looks like the system can never fire again, and it can't, as far as the true continuous description goes. You choose a threshold for it, "artificially" introduce the spike, and reset the position to subthreshold.

Standard mathematical analysis of the Morris-Lecar neuron seems like a nightmare, though; it's a system you want to understand graphically (and thus numerically) by looking at its nullclines, fixed points, and typical numerical solutions in different regimes (Fig 3):

http://pegasus.medsci.tokushima-u.ac.jp/~tsumoto/work/nlp_kyutech2003.pdf

If these terms are unfamiliar, there's a book on analyzing these kinds of models by Strogatz called "Nonlinear Dynamics and Chaos". The graphical analysis is in the first couple chapters.

(The Tsumoto paper above exists in several versions with the same title; I think I've found three of different lengths. This is the medium-sized one, the shorter one is a symposium paper, and I can't seem to find the longer one that goes into more detail.)
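For anyone who wants to see this behavior numerically, here is a minimal Morris-Lecar sketch using a commonly quoted Rinzel-Ermentrout-style parameter set; the values are assumptions to be checked against the paper above:

```python
# Morris-Lecar sketch with commonly quoted type-II parameters
# (assumed values; verify against a reference before relying on them).
import numpy as np

C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0       # uF/cm^2, mS/cm^2
EL, ECa, EK = -60.0, 120.0, -84.0          # mV
V1, V2, V3, V4, phi, I = -1.2, 18.0, 2.0, 30.0, 0.04, 100.0

def minf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
def winf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
def tauw(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

dt, steps = 0.05, 20000                    # 1000 ms of simulated time
V, w = -60.0, 0.0
trace = []
for _ in range(steps):
    dV = (I - gL*(V-EL) - gCa*minf(V)*(V-ECa) - gK*w*(V-EK)) / C
    dw = phi * (winf(V) - w) / tauw(V)
    V, w = V + dt*dV, w + dt*dw
    trace.append(V)
print(max(trace), min(trace))  # sustained voltage oscillations at this drive
```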
 
  • #10
Oh yeah, also a good general book that's completely free and out of print (but still appreciated):

Spikes, Decisions & Actions: Dynamical Foundations of Neuroscience

a free electronic copy is available on the author's website:
http://cvr.yorku.ca/webpages/wilson.htm

I've never read it myself, but I've seen modern authors like Bard Ermentrout (Mathematical Foundations of Neuroscience) suggest it. Another book that was fun was Izhikevich's "Dynamical Systems in Neuroscience". Izhikevich has his own model that he happily claims is one of the most efficient and biologically plausible neurons out there:

http://wiki.transhumani.com/images/b/b8/Cost_of_neuron_models.jpg
(this is from one of his papers)
 
  • #11
atyy said:
No it is not obvious, as it depends on the noise. I don't remember exactly what noise gives that Fokker-Planck equation, but I think what your book has should be similar to http://lcn.epfl.ch/~gerstner/SPNM/node37.html (Eq 5.73 and 5.89)
Well, the book shows a derivation that, for the general case ##\frac{dx}{dt}=\alpha (x)+ \beta (x)F(t)##, the Fokker-Planck (or diffusion) equation is satisfied: ##\frac{\partial P(x|y,t)}{\partial t}=-\frac{\partial}{\partial x}[a(x)P(x|y,t)]+\frac{1}{2} \frac{\partial ^2 }{\partial x^2}[b(x)P(x|y,t)]##. The demonstration is rather lengthy...

Thanks for all guys.
I have a sort of monograph (~25 pages) to write in about 1 week, and I haven't even started yet (it was impossible for me to start before). It's not an obligation, but it would be a plus in my case. Since the subject is arbitrary but must be related to stochastic processes, I thought neuron firing was a good choice; it looked, and still looks, interesting. I didn't realize it was so complicated, nor do I know of simpler subjects.
I was thinking that maybe I could take a very simple neuron model and find an analytical solution for the probability of a neuron firing at time t given that it fired at time t=0. However, I doubt I can take a simpler case than the one I'm dealing with (m>0 and tau equal to infinity), yet it yields either a system of coupled PDEs or a single PDE (I'm not even sure of this; I don't understand the book there). And the solution given in the book for ##P(x|y,t)## has a typo, I believe, but since he "solved" the equation by looking at a table, I don't even know how to solve the PDE or the system of PDEs (I don't even know what he solved). I must say I'm a bit discouraged and short of ideas.
 
  • #12
fluidistic said:
I was thinking that maybe I could take a very simple neuron model and find an analytical solution for the probability of a neuron firing at time t given that it fired at time t=0. However, I doubt I can take a simpler case than the one I'm dealing with (m>0 and tau equal to infinity), yet it yields either a system of coupled PDEs or a single PDE (I'm not even sure of this; I don't understand the book there). And the solution given in the book for ##P(x|y,t)## has a typo, I believe, but since he "solved" the equation by looking at a table, I don't even know how to solve the PDE or the system of PDEs (I don't even know what he solved).

Sounds good. The calculation of the average time between spikes (the inter-spike interval) in the integrate-and-fire neuron is a classic calculation, so it's nice.

With ##m=0## and ##\tau = \infty##, I think the PDE you wrote in post #7 is the heat equation. Wikipedia gives the solution http://en.wikipedia.org/wiki/Heat_equation.
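The fundamental solution on the whole line is easy to verify symbolically; a sympy sketch (this checks only the PDE, not the boundary conditions of the firing problem):

```python
# Verify that the Gaussian heat kernel solves dP/dt = (sigma^2/2) d^2P/dx^2.
import sympy as sp

x, y = sp.symbols('x y', real=True)
t, sigma = sp.symbols('t sigma', positive=True)
P = sp.exp(-(x - y)**2 / (2 * sigma**2 * t)) / sp.sqrt(2 * sp.pi * sigma**2 * t)
residual = sp.simplify(P.diff(t) - sigma**2 / 2 * P.diff(x, 2))
print(residual)  # 0
```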
 
  • #13
atyy said:
Sounds good. The calculation of the average time between spikes (the inter-spike interval) in the integrate-and-fire neuron is a classic calculation, so it's nice.
Oh I see...
atyy said:
With ##m=0## and ##\tau = \infty##, I think the PDE you wrote in post #7 is the heat equation. Wikipedia gives the solution http://en.wikipedia.org/wiki/Heat_equation
Ah, you're right. I'd have to see what the boundary conditions are, etc. But m=0 would mean that the other neurons aren't affecting the particular neuron I consider. In other words, the mean value of i(t) would be 0; that is, the mean value of the input signal would vanish. So this would be less realistic than m>0, right? Albeit simpler to deal with.
 
  • #14
fluidistic said:
Oh I see...

Ah, you're right. I'd have to see what the boundary conditions are, etc. But m=0 would mean that the other neurons aren't affecting the particular neuron I consider. In other words, the mean value of i(t) would be 0; that is, the mean value of the input signal would vanish. So this would be less realistic than m>0, right? Albeit simpler to deal with.

I'm not sure, but I think the neuron can still fire. m=0 means the mean input from other neurons is 0, so the neuron receives equal amounts of input from neurons that excite it and from neurons that inhibit it. Although the neuron receives zero net input, it is still receiving excitatory input. So if the excitatory and inhibitory inputs don't cancel exactly at all times, i.e. the variance is large, maybe the neuron can still spike.

But if that doesn't work out, the closed-form solution for the mean inter-spike interval is given in Eq 5.104 of http://icwww.epfl.ch/~gerstner/SPNM/node37.html . I don't think the closed form Eq 5.104 is so crucial; the more important bit of reasoning is why ##\langle s \rangle = \int sP_{I_{o}}(s|0) \, ds## gives the mean inter-spike interval.
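That reasoning can also be checked by brute force: simulate many first-passage times and average them. A sketch for the drift-dominated case ##\tau=\infty##, m>0, where the mean time to reach threshold B from reset y should be (B-y)/m (a standard first-passage result, stated here as an assumption to check; all values are illustrative):

```python
# Monte-Carlo mean first-passage time for dx = m dt + sigma dW, from reset
# y to threshold B, compared with the standard result (B - y)/m for m > 0.
import numpy as np

rng = np.random.default_rng(2)
m, sigma, y, B = 0.5, 1.0, 0.0, 5.0
dt, n_paths = 0.01, 2000
times = np.empty(n_paths)
for i in range(n_paths):
    x, t = y, 0.0
    while x < B:
        x += m * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    times[i] = t
print(times.mean(), (B - y) / m)  # both close to 10
```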
 
  • #15
If your project allows you to do computational stuff, there's a free and friendly neuron simulator with which you can make integrate-and-fire neurons and inject Ornstein-Uhlenbeck processes etc. http://briansimulator.org/
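For example, a minimal sketch along those lines; note it assumes the modern brian2 package, whose API differs from the Brian version linked above:

```python
# A noisy leaky integrate-and-fire neuron in brian2 (assumed modern API).
from brian2 import NeuronGroup, SpikeMonitor, run, ms

tau = 20*ms
# Mean drive 0.8 (below the threshold of 1), plus additive noise;
# xi is brian2's built-in Gaussian white-noise term.
eqs = 'dv/dt = (0.8 - v)/tau + 0.3*xi*tau**-0.5 : 1'
G = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0', method='euler')
spikes = SpikeMonitor(G)
run(500*ms)
print(spikes.t)  # spike times: the noise drives v across threshold
```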
 
  • #16
Thank you atyy for the numerical simulator. I can't use it for the work I'm trying to finish on time, but it's good to know about it.
With respect to the case m=0, you are right, I think... And if I'm not wrong, the neuron will eventually fire. It is only for the case m<0 that the probability of firing becomes less than 1. The book gives the probability of firing for m<0 as ##R(B,y)=\exp \{ -2(B-y)|m|/\sigma ^2 \}## and 1 for m>0. Oddly enough it doesn't say a word about the case m=0, but it's obvious that it's 1, because the limit as m tends to 0 from both sides is 1.
So I guess my goal would be to explicitly derive the equations and solve them entirely for the special case m=0, and then show that this agrees with the book when I take the limit of m tending to 0... That's a good idea, I believe, for the work I'm asked to do (which is nothing serious at all, but they want us to do some math and not copy the book word for word).
I'm going to spend the next days on it.
I'll ask questions in the DEs section, because I have some doubts.
Also, I would like to thank Pythagorean once more for the book Spikes, Decisions & Actions. It really seems interesting and nice.
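Goel's R(B,y) for m<0 can be sanity-checked by Monte-Carlo; a sketch with illustrative parameters (paths are truncated at a finite horizon, so the estimate is slightly low):

```python
# Monte-Carlo check of R(B, y) = exp(-2*(B - y)*|m|/sigma^2) for m < 0:
# the probability that dx = m dt + sigma dW ever reaches B starting from y.
import numpy as np

rng = np.random.default_rng(3)
m, sigma, y, B = -0.2, 1.0, 0.0, 1.0
dt, T, n_paths = 0.01, 50.0, 2000
hits = 0
for _ in range(n_paths):
    x, t = y, 0.0
    while x < B and t < T:   # finite horizon T approximates "ever"
        x += m * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    hits += (x >= B)
print(hits / n_paths, np.exp(-2 * (B - y) * abs(m) / sigma**2))  # both ~0.67
```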
 
  • #17
Also, my equation for P(x|y,t) would not be the heat equation, I believe, because P is a function of x, y and t. So the term ##\frac{\partial ^2 P }{\partial x^2}## is not the full Laplacian that would be required for the heat equation. On top of that, P satisfies another PDE (the Kolmogorov backward equation, I think). So I still have a system of 2 PDEs, with 3 boundary conditions.
I've never dealt with that before. I hope it's not that hard to solve.

Edit: Never mind. When I add the 2 PDEs together, I arrive at the heat equation in Cartesian coordinates!
 
  • #18
I looked up Goel. The mean time to first spike becomes infinite for m=0 and tau=0. Maybe one has to keep tau>0 for something reasonable.
 
  • #19
atyy said:
I looked up Goel. The mean time to first spike becomes infinite for m=0 and tau=0. Maybe one has to keep tau>0 for something reasonable.

Oh... Well, I took ##\tau = \infty## and m=0. On page 192 he takes ##\tau \to \infty## but doesn't restrict the value of m. On the next page, skipping most of the math, he reaches the 2 probabilities I wrote, for m>0 and m<0, but doesn't say anything about the (obvious) case m=0.
 
  • #20
fluidistic said:
Oh... Well, I took ##\tau = \infty## and m=0. On page 192 he takes ##\tau \to \infty## but doesn't restrict the value of m. On the next page, skipping most of the math, he reaches the 2 probabilities I wrote, for m>0 and m<0, but doesn't say anything about the (obvious) case m=0.

Yes, but in Goel's Eq 16a, if you set m=0 the mean time to spike is infinite.

I think if m=0 and tau=0, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.

But if tau=1 (for some units), then dx/dt=-x+i(t), so x has a tendency to relax back to zero in the absence of a stimulus, whether it starts from x>0 or x<0.
 
  • #21
atyy said:
Yes, but in Goel's Eq 16a, if you set m=0 the mean time to spike is infinite.
Oh yes, you are right. However, the probability of firing would still be 1. I interpret this as the neuron taking an infinite amount of time to eventually fire, if that makes any sense. When m is negative, the probability of firing is less than 1, even after waiting an infinite amount of time.
atyy said:
I think if m=0 and tau=0, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.

But if tau=1 (for some units), then dx/dt=-x+i(t), so x has a tendency to relax back to zero in the absence of a stimulus, whether it starts from x>0 or x<0.

If tau is 0, I don't really see how you obtain the eq. dx/dt=i(t). Originally the eq. is ##\frac{dx}{dt}=-\frac{x}{\tau}+i(t)##. Tau equal to 0 seems problematic to me, as the first term blows up.
 
  • #22
fluidistic said:
Oh yes, you are right. However, the probability of firing would still be 1. I interpret this as the neuron taking an infinite amount of time to eventually fire, if that makes any sense. When m is negative, the probability of firing is less than 1, even after waiting an infinite amount of time.

Yes, that makes sense.

fluidistic said:
If tau is 0, I don't really see how you obtain the eq. dx/dt=i(t). Originally the eq. is ##\frac{dx}{dt}=-\frac{x}{\tau}+i(t)##. Tau equal to 0 seems problematic to me, as the first term blows up.

Oops, I meant ##\tau=\infty##.
 
  • #23
atyy said:
Yes, that makes sense.
Ok good to know!

atyy said:
Oops, I meant ##\tau=\infty##.
Oh :)

atyy said:
I think if m=0 and ##{\tau=\color{red} \infty}##, then the equation is dx/dt=i(t), where i(t) is Gaussian white noise with zero mean.

So the membrane voltage will essentially be a random walk which could diverge to -∞.
The divergence of the voltage to -infinity is removed by the boundary conditions on P(x|y,t). The book gives ##P(-\infty |y,t)=0##, if I don't misunderstand it (page 192).
 
  • #24
Worried about my understanding

I'm worried; I seem to misunderstand something. In post #5, when I solved the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## for the case ##F(t)=0## (no white noise), I reached a solution V(t) that decreases with time.
And as far as I know, for the integrate-and-fire model (and most other models?), the potential should increase with time after it has been reset, until it reaches the threshold voltage and the neuron fires again.
So in the model of the book, it seems that the neuron never fires after it has been reset?
 
  • #25
fluidistic said:
I'm worried; I seem to misunderstand something. In post #5, when I solved the equation ##\frac{dV}{dt}=-\frac{V}{\tau} +m +\sigma F(t)## for the case ##F(t)=0## (no white noise), I reached a solution V(t) that decreases with time.
And as far as I know, for the integrate-and-fire model (and most other models?), the potential should increase with time after it has been reset, until it reaches the threshold voltage and the neuron fires again.
So in the model of the book, it seems that the neuron never fires after it has been reset?

Maybe you should have ##mt## instead of ##m\tau## in the second term?

##V(t)=Ae^{-t/\tau}+mt##
 
  • #27
atyy said:
Maybe you should have ##mt## instead of ##m\tau## in the second term?

##V(t)=Ae^{-t/\tau}+mt##
I've rechecked my math and even tried Wolfram Alpha (http://www.wolframalpha.com/input/?i=dV/dt=-V(t)/a+m); apparently I made no mistake there.

atyy said:
BTW, the model for ##\tau = \infty## is the Gerstein-Mandelbrot model of 1964.

Their paper is available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1367440/pdf/biophysj00646-0045.pdf .

They discuss the ##m=0## (using ##m## from Goel's notation, not theirs) case on p50, just after their Eq 6.
Thank you very, very much for this reference. This is really helpful.
 
  • #28
fluidistic said:
I've rechecked my math and even tried Wolfram Alpha (http://www.wolframalpha.com/input/?i=dV/dt=-V(t)/a+m); apparently I made no mistake there.

So I guess when there is no noise, the solutions with finite ##\tau## and ##\tau=\infty## are different.

With finite ##\tau##, ##V=m\tau## at steady state. So with ##m\tau## less than threshold, the neuron fires only because of the added noise, which causes the membrane potential to cross the threshold randomly.

With ##\tau=\infty##, dV/dt=m, and the solution is V=mt+C, so even if m is small, the potential will eventually reach threshold and fire if m>0.
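The two answers can be reconciled directly: with V(0)=0 the finite-##\tau## solution is ##V(t)=m\tau(1-e^{-t/\tau})##, and its ##\tau \to \infty## limit is mt. A one-line sympy check of just that limit:

```python
# The tau -> infinity limit of the finite-tau solution recovers V = m*t.
import sympy as sp

t, tau, m = sp.symbols('t tau m', positive=True)
V = m * tau * (1 - sp.exp(-t / tau))
print(sp.limit(V, tau, sp.oo))  # m*t
```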
 
  • #29
atyy said:
So I guess when there is no noise, the solutions with finite ##\tau## and ##\tau=\infty## are different.

With finite ##\tau##, ##V=m\tau## at steady state. So with ##m\tau## less than threshold, the neuron fires only because of the added noise, which causes the membrane potential to cross the threshold randomly.

With ##\tau=\infty##, dV/dt=m, and the solution is V=mt+C, so even if m is small, the potential will eventually reach threshold and fire if m>0.

I see, thank you.

By the way, I checked that, at least for m=0, eq. 10 does not solve eq. 9 (pages 192-193) as claimed. I'm having a hard time finding the solution to eq. 9 when m=0, and I doubt it is even true for ##m\neq 0##. Also, I think there's a typo in eq. 11: there's a missing factor of ##1/\sqrt t##, if table 3.4 on page 52 is right.
 
  • #30
fluidistic said:
I see, thank you.

By the way, I checked that, at least for m=0, eq. 10 does not solve eq. 9 (pages 192-193) as claimed. I'm having a hard time finding the solution to eq. 9 when m=0, and I doubt it is even true for ##m\neq 0##. Also, I think there's a typo in eq. 11: there's a missing factor of ##1/\sqrt t##, if table 3.4 on page 52 is right.

If you set m=0 in Eq 9, I think you get the one dimensional heat equation
http://mathworld.wolfram.com/HeatConductionEquation.html
 
  • #31
atyy said:
If you set m=0 in Eq 9, I think you get the one dimensional heat equation
http://mathworld.wolfram.com/HeatConductionEquation.html

Yes, you are right. The problem I'm having is with the boundary conditions; I don't really know what they are. The article you gave me (Gerstein-Mandelbrot) seems to deal with the heat equation on page 52, for the case where their c is 0 (Goel's m, if I understood well).
However, in Goel, the diffusion equation (eq. 9) of page 192 indeed becomes the heat equation, and both eq. 6a and 6b (the backward diffusion equation, the Kolmogorov one) also become the heat equation.
But P is a function of x, y and t. The 2 PDEs are then ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2}## and ##\frac{\partial P}{\partial t} = \frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial y^2}##, subject to the boundary conditions of page 192 (8a, 8b and 8c, though I still have doubts that Goel wrote 8b correctly).
I wanted to kind of "cheat": look up the solution in Goel for the general case (m not necessarily 0) and set m=0 in his solution. However, the resulting solution does not satisfy the diffusion equation.
Here is eq. 10 of page 193 with m=0: ##P(x,y,t)=\frac{1}{\sqrt{2\pi}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##. But as I said, I checked whether it solves the heat equation, and it does not.
I also tried ##\frac{1}{\sqrt{2\pi t}\sigma} \left [ \exp \{ -\frac{(x-y)^2}{2\sigma ^2 t} \} - \exp \{ - \frac{(x+y-2B)^2}{2\sigma ^2 t} \} \right ]##, but it too fails to solve the heat equation. That's why I'm starting to believe that the solution Goel gives for the general case of m doesn't work at all, because it fails for at least m=0.

In the article of Gerstein-Mandelbrot, the solution given is ##I(z_0,\tau)=(4\pi )^{-1/2}z_0 \tau ^{-3/2}\exp \{ -\frac{z_0^2}{4\tau} \}##, where I believe G-M's tau is equivalent to Goel's t and G-M's ##z_0## is equivalent to Goel's x-y or something like that; I am not really sure (but the threshold potential B must appear somewhere... maybe in ##z_0##?).

P.S.: Also notice that apparently, for the 2 PDEs, the second derivative of P with respect to x is equal to the second derivative of P with respect to y; at least if Goel's solution works (which it does not seem to, but maybe the real solution has this property). So one could just add both equations to arrive at a single PDE. Guess what this PDE is? The heat equation in either x or y. That is, ##\frac{\sigma ^2}{2} \frac{\partial ^2 P}{\partial x^2} = \frac{\partial P}{\partial t}##. (I have started a thread on this topic at https://www.physicsforums.com/showthread.php?p=4461916#post4461916.)
One more comment: looking at either G-M's solution or Goel's, it doesn't seem like the solution is separable, so separation of variables might not be the way to go. That may be due to the weird boundary conditions; I'm not really sure.
 
  • #32
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."
 
  • #33
atyy said:
At the bottom of G-M's p49 they say that their ##z_{o}## is the distance between the resting potential and threshold. In their case, the resting potential is the potential immediately after a spike, because they say at the bottom of p48 "6. Immediately after the state point has attained the threshold and caused the production of an action potential, it returns to the resting potential, only to begin again on its random walk."

I see. If I understand well, ##z_0## corresponds to B-m, which in my case is B. I don't really see how to obtain P(x,y,t) from there...

EDIT: Never mind! The answer given in Goel works (with the ##1/\sqrt{t}## typo fixed)! I had to redo the algebra about 3 times, and I've checked with the program Maxima that all is correct... phew... Hurray.
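For the record, the corrected solution can also be verified without Maxima; a sympy sketch checking both the heat equation and the absorbing boundary at x=B:

```python
# Verify the method-of-images solution for the m = 0 case: it solves
# dP/dt = (sigma^2/2) d^2P/dx^2 and vanishes at the threshold x = B.
import sympy as sp

x, y, B = sp.symbols('x y B', real=True)
t, sigma = sp.symbols('t sigma', positive=True)
pref = 1 / (sp.sqrt(2 * sp.pi * t) * sigma)
P = pref * (sp.exp(-(x - y)**2 / (2 * sigma**2 * t))
            - sp.exp(-(x + y - 2*B)**2 / (2 * sigma**2 * t)))
print(sp.simplify(P.diff(t) - sigma**2 / 2 * P.diff(x, 2)))  # 0
print(sp.simplify(P.subs(x, B)))                             # 0
```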
 
  • #34
Nice! I'll have to try Maxima some time.
 
  • #35
atyy said:
Nice! I'll have to try Maxima some time.

I was led into error by Maxima's notation; I might post a screenshot later, if I have the time, to show you. Other than that, it seems pretty nice.

By the way, I've been reading a bit of the book "Spikes, Decisions and Actions" by H.R. Wilson. On page 1 it's written that there are around 10^12 neurons and 10^15 synapses in the human brain. However, I thought, and most other sources state, that there are "only" 100 billion neurons in the whole nervous system, so 10^11 neurons. They also say that there are around 10^14 to 10^15 synapses. So who's right?
If I understand well, this means that there are about 1000 to 10000 synapses per neuron on average? So, if I'm still right, more than 1000 to 10000 dendrites per neuron on average?
 

1. What is a neural impulse?

A neural impulse, also known as an action potential, is an electrical signal that travels along a neuron. It is initiated when the neuron receives a strong enough stimulus, causing a change in the neuron's membrane potential. This change in potential triggers a cascade of events that results in the transmission of the impulse along the axon of the neuron.

2. How does the brain communicate through neural impulses?

The brain communicates through neural impulses by using specialized cells called neurons. These neurons are connected to one another through synapses, which are small gaps between neurons. When a neuron receives a signal, it releases neurotransmitters into the synapse, which then bind to receptors on the next neuron, causing an electrical impulse to be generated and transmitted.

3. What are some common references in the field of neuroscience?

Some common references in the field of neuroscience include research articles published in peer-reviewed journals, textbooks, and scientific conferences. There are also many online resources, such as databases and websites, that provide information and data related to neuroscience research.

4. How do neuroscientists generate new ideas for research?

Neuroscientists generate new ideas for research through a variety of methods. These may include reviewing current literature and identifying gaps in knowledge, conducting experiments and analyzing data, collaborating with other researchers, attending conferences and seminars, and brainstorming with colleagues. Many also draw inspiration from their own personal experiences and observations in daily life.

5. How does neuroscience research impact our understanding of the brain and behavior?

Neuroscience research has greatly impacted our understanding of the brain and behavior. Through studying the structure and function of the brain, neuroscientists have been able to identify the neural basis of various behaviors and cognitive processes. This has led to advancements in diagnosing and treating neurological and psychiatric disorders, as well as improving our overall understanding of the brain and its complexities.
