Key papers in computational neuroscience

  • #1
The following email was sent out via the computational neuroscience mailing list [Comp-neuro]. Since Comp-neuro is a public list I think it is fine to repost this here (I am not the author). Perhaps someone will find this useful...



This is a collection of references obtained in response to a request for key papers from the computational neuroscience community. I have excluded self-citations (but many of those excluded papers actually appear in my own list of key papers below). I have removed the names of respondents, but have left their comments in, as these can be very useful.
Many thanks to all those who contributed to this wide-ranging collection.
Jim Stone, 18th July 2008.
JV Stone's key papers:
SB Laughlin. A simple coding procedure enhances a neuron's information capacity. Z Naturforsch, 36c:910-912, 1981. See other papers by Laughlin which cover similar material.
Lettvin, J.Y., Maturana, H.R., McCulloch, W.S., and Pitts, W.H., What the Frog's Eye Tells the Frog's Brain, Proc. Inst. Radio Engr. 47:1940-1951, 1959.
Ballard, DH, Cortical connections and parallel processing: Structure and function, in Vision, in Brain and cooperative computation, pp 563-621, 1987, Arbib, MA and Hanson AR (Eds).
Y Weiss, EP Simoncelli, and EH Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598-604, 2002.
BA Olshausen and DJ Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14:481-487, 2004.
T Poggio, V Torre, and C Koch. Computational vision and regularization theory. Nature, 317:314-319, 1985.
AA Stocker and EP Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4):578-585, 2006.
Marr, D., and T. Poggio. Cooperative Computation of Stereo Disparity. Science, 194, 283-287, 1976.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. Learning representations by back-propagating errors. Nature, 323, 533-536, 1986.
Hinton, G. E. and Nowlan, S. J. How learning can guide evolution. Complex Systems, 1, 495-502, 1987.
Hinton, G. E. and Plaut, D. C. Using fast weights to deblur old memories. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA, 1987.
Becker, S. and Hinton, G. E. A self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355(6356), 161-163, 1992.
Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. A learning algorithm for Boltzmann machines. Cognitive Science, 9, 147-169, 1985.
Durbin, R and Willshaw, D. An analogue approach to the traveling salesman problem using an elastic net method. Nature, 326(6114):689-691, 1987.
Douglas, RJ, Martin, KAC and Whitteridge, D. A canonical microcircuit for neocortex. Neural Computation, 1:480-488, 1989.
Swindale, NV. A model for the formation of orientation columns. Proceedings Royal Society London B, 215:211-230, 1982.

Zohary, E, Shadlen, MN and Newsome, WT (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370:140-143.
Hopfield's papers (see below).
Hodgkin and Huxley 1952d (the modeling paper)
Song and Abbott: Cortical development and remapping through spike timing-dependent plasticity. Neuron 32:339-50, 2001
Buonomano and Merzenich: Temporal information transformed into a spatial code by a neural network with realistic properties. Science. 1995 Feb 17;267(5200):1028-30.
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972 Jan;12(1):1-24.
H.B. Barlow, The mechanical mind. Ann. Rev. Neurosci. 13:15-24 (1990). It is about a simple model of consciousness.

From the cognitive side of computational neuroscience, I recommend:
Pouget A, Deneve S, Duhamel JR (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci. 3: 741-747.
Hamker, F.H., Zirnsak, M., Calow, D., Lappe, M. (2008) The peri-saccadic perception of objects and space. PLOS Computational Biology 4(2):e21
Olshausen BA, Field DJ. 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381:607-9.
I was really influenced by
Atick, Joseph J. Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, 3(2):213-252, 1992,
which is a review paper closely related to the seminal papers of Barlow and Marr.
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972 Jan;12(1):1-24.
Wiring optimization - Dmitri B. Chklovskii
Traub's CA1 model / Pinsky-Rinzel 2-compartment models
Erik De Schutter's Purkinje cell models
Henry Markram's cortical models
Rolls & Treves - hippocampal network
Polsky & Mel - 2-layer pyramidal cell model
Terry Sejnowski - synapse, ModelDB
Upinder S Bhalla - million synapses / bistable systems
These papers introduced accurate models of calcium dynamics and neuromodulatory effects on ion channel activity.
Bhalla US, Iyengar R. Emergent properties of networks of biological signaling pathways. Science. 1999 Jan 15;283(5400):381-7.
Zador A, Koch C, Brown TH. Biophysical model of a Hebbian synapse. Proc Natl Acad Sci U S A. 1990 Sep;87(17):6718-22.
Holmes WR, Levy WB. Insights into associative long-term potentiation from computational models of NMDA receptor-mediated calcium influx and intracellular calcium concentration changes. J Neurophysiol. 1990 May;63(5):1148-68.
There are two theoretical papers which, in my opinion, have had a strong influence on the way we think about synaptic transmission and short term plasticity today:
A W Liley and K A North. An electrical investigation of effects of repetitive stimulation on mammalian neuromuscular junction. J Neurophysiol, 16(5):509-527, Sep 1953.
W J Betz. Depression of transmitter release at the neuromuscular junction of the frog. J Physiol, 206(3):629-644, 1970.
These were, of course, published before the term "computational neuroscience" was used. The first proposed a mathematical model for vesicle pool depletion, which is still in use today; the second was the first to extend this with the release probability as a dynamic variable. These ideas were then further popularised by these classic papers:
L F Abbott, J A Varela, K Sen, and S B Nelson. Synaptic depression and cortical gain control. Science, 275(5297):220-224, Jan 1997.
M V Tsodyks and H Markram. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA, 94(2):719-723, Jan 1997.
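The depletion picture these papers built on can be sketched in a few lines; this is a minimal Tsodyks-Markram-style depressing synapse, with the release fraction U and the recovery time constant chosen purely for illustration:

```python
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=0.8):
    """Minimal vesicle-depletion model: each spike releases a fraction U
    of the available resources x, which recover toward 1 with time
    constant tau_rec between spikes."""
    x = 1.0           # fraction of available vesicles
    amplitudes = []
    last_t = None
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        amplitudes.append(U * x)  # postsynaptic response amplitude
        x -= U * x                # depletion by release
        last_t = t
    return amplitudes

# A regular 20 Hz train shows progressive depression of the response:
amps = depressing_synapse(np.arange(0, 0.5, 0.05))
```

With these parameters the response amplitudes decay toward a steady state, which is the depression phenomenon both classic papers analyse.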
What I have found during my collaborations with biologists is that it is not so much the precise mathematical formulation as the very basic ideas and concepts explored in these papers that have made a strong impact on the whole field, and they have certainly cleared the way for numerous further theoretical studies.
Another paper I have come across just recently which I would consider as rather important and useful is this:
J J Hopfield and A V M Herz. Rapid Local Synchronization of Action Potentials: Toward Computation with Coupled Integrate-and-Fire Neurons. Proc Natl Acad Sci USA, 92(15):6655-6662, Jul 1995.
Cited more than 150 times, it contains some strong results regarding the behaviour of recurrent networks, and also anticipates a number of results shown more recently.

Here are my top 12 papers, in chronological order. I have gone for ones that make my science heart sing, that introduce a big idea or a useful tool, connect experiment and theory in a satisfying way, or are an example of work on a topic that has been mysteriously under-represented.
I have tried to briefly qualify why they could be thought of as classic by the wider community.
1) Willshaw and von der Malsburg (1979). Future hot topic: modelling development. Excellent interaction between theory and experiment - predicted ephrins and eph receptors.

2) Laughlin (1981) Z. Naturforsch. C 36:910-2 Big idea: coding matches stimulus statistics.

3) Srinivasan et al. (1982) Proc. Roy. Soc. B 216(1205):427-59 Excellent interaction between theory and experiment: predicts responses of first order visual interneurons if they exploit spatial and temporal correlations to reduce redundancy.
4) Buchsbaum and Gottschalk (1983). Proc. R. Soc. B 220:89-113 Excellent interaction between theory and experiment: uses PCA to accurately calculate the colour channels that maximise information transmission. Deserves to be more widely known.

5) Bialek et al. (1991) Science. Useful application for theorist: neat method for estimating stimulus filters from the response.

6) Treves and Rolls (1992) Hippocampus 2(2):189-99 Excellent interaction between theory and experiment: identified the function of the dentate gyrus in the hippocampus, and matched network organisation to function far more successfully than Marr.

7) Van Hateren (1992) J. Comp Phys. A 171:157-170 Excellent interaction between theory and experiment: predicts visual spatiotemporal receptive fields of cells connected to photoreceptors in the fly so as to maximise information about natural images from first principles, with stunning success.

8) Wolpert et al. (1995) Science 269(5232):1880-2 Big idea: internal models and the use of priors.

9) Zemel et al. (1998) Neur. Comp. 10(2):403-30 Big idea: neurons encode distributions, not single values

10) Van Rossum et al. (2000) J. Neuro. 20(23):8812-21 Excellent interaction between theory and experiment: simple application of Fokker-Planck physics to explain the network-level functional consequences of cellular-level experimental data.

11) Brunel (2000) J. Comp. Neuro 8:183-208 Useful application for theorist: calculations of the population activity of a network of integrate-and-fire neurons.

12) Schreiber (2000) Physical Review Letters 85(2):461-64 Future hot topic: Current best method to infer causal relationships between neurons using information theory.
  • #2
Here are the most important papers in 3 subjects: plasticity, simple neuron models, and network dynamics. Of course, there are other categories in computational neuroscience (detailed neuron models, cortex modeling, vision, audition, etc.) on which others will report.
1) In plasticity:
Hebb, 1949 (book)
Bienenstock, Cooper and Munro, J. Neurosci. 1982 (BCM rule)
Kohonen, Neural Networks, 1993 (Kohonen algorithm in a comp neuro perspective; other papers of his would also do)
Hopfield, PNAS, 1982 (Hopfield model)
Amit, Gutfreund, Sompolinsky, Phys Rev A, 1985 (analysis of Hopfield model)
Linsker, PNAS, 1986 (emergence of receptive fields)
MacKay and Miller 1990 Neural Comput. (analysis of Linskers rule)
Miller and MacKay 1994 Neural Comput. (the role of constraints)
Gerstner et al, Nature 1996 (first paper on STDP)
Kempter et al. Phys Rev E, 1999 (first analysis of STDP)
Lisman, PNAS, 1999 (first model of plasticity based on calcium dynamics)
Song Miller Abbott, Nat. Neurosci, 2000 (popular paper on STDP)
van Rossum et al. 2000, J. Neurosci. (STDP with soft bounds for the weights)
Fusi, Biological Cybernetics, 2002 (some general problems of Hebbian rules - nice review of work of Fusi)
Shouval et al., PNAS, 2002 (calcium model of plasticity)
Senn Tsodyks, Markram, Neural. comp. 2001 (STDP algorithm)
Fusi, Drew, Abbott Nat. Neuroscience 2005 (Cascade model)
Toyoizumi et al. PNAS 2005 (BCM rule for spiking neurons; also optimizes information)

2) In simplified neuron models
Lapicque 1907 (often cited as first integrate-and-fire model, even though it does not show reset)
FitzHugh 1961, Biophys. Journal (2-dim neuron model)
Stein 1967, Biophys. Journal (some models of neural variability - integrate-and-fire model with noise)
Ermentrout 1996, Neural Comput., Canonical type I model, quadratic integrate-and-fire
Kistler et al. 1997, Neural Computation (systematic reduction to a threshold model/Spike Response Model)
Latham 2000, J. Neurophys. quadratic integrate-and-fire
Izhikevich 2003, IEEE, 2-dim. neuron model
Fourcaud et al. 2003, J. Neurosci. exp. integrate-and-fire model
Jolivet et al. 2006, J. comput. Neurosci. -- spiking in real neurons can be explained by threshold models
Badel et al. 2008, J. Neurophysiol. -- real neurons are exponential integrate-and-fire models, this is a very recent paper, but it is really important for the discussion of simple neuron models
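For orientation, the simplest member of this family of models, the leaky integrate-and-fire neuron, can be simulated in a few lines. The parameter values below are illustrative, not taken from any of the papers listed:

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, R=1e7, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07):
    """Euler integration of tau * dV/dt = -(V - v_rest) + R * I(t).
    Returns spike times; V is reset after each threshold crossing."""
    v = v_rest
    spikes = []
    for i, current in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * current)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A constant 3 nA current drives regular firing; zero current gives none.
spikes = simulate_lif(np.full(10000, 3e-9))   # 1 s of input
silent = simulate_lif(np.zeros(10000))
```

The quadratic and exponential integrate-and-fire models in the list replace the linear leak term with a nonlinear spike-generating current, but keep the same simulate-and-reset structure.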

3) Network dynamics
Wilson and Cowan, 1972
Amari 1974
Brunel and Hakim, 1999 Neural Computation
Gerstner 2000 Neural Computation
Brunel 2000 J. Comput. Neurosci.
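For readers new to this literature, the Wilson-Cowan model that opens the list is a pair of coupled rate equations for excitatory and inhibitory populations. A minimal Euler-integration sketch, with purely illustrative coupling weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=5000, dt=0.01, w_ee=12.0, w_ei=10.0,
                 w_ie=10.0, w_ii=2.0, P=1.0, Q=0.0):
    """Euler integration of Wilson-Cowan-style rate equations:
    dE/dt = -E + S(w_ee*E - w_ei*I + P)
    dI/dt = -I + S(w_ie*E - w_ii*I + Q)
    where S is a sigmoid and E, I are population activities."""
    E, I = 0.1, 0.1
    for _ in range(steps):
        dE = -E + sigmoid(w_ee * E - w_ei * I + P)
        dI = -I + sigmoid(w_ie * E - w_ii * I + Q)
        E += dt * dE
        I += dt * dI
    return E, I

E, I = wilson_cowan()
```

Depending on the weights and inputs, the same two equations settle to a fixed point or oscillate, which is why this tiny model anchors so much of the network-dynamics literature.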

Finally, I am attaching a list of great papers. If I were trying to get outsiders excited, I'd definitely use the Andy Schwartz paper on neural prosthetics. I also think I would do Olshausen & Field, as it really kicked people off on thinking about natural images. The Hopfield paper is the greatest of the bunch but is likely too old for what you're looking for. Spike-timing-dependent plasticity is a hot topic, and I think it carries on a great tradition of computational neuroscientists connecting cellular plasticity to larger network functions; and I think Peter Dayan's work (and Montague's, in the original paper) is some of the first that really puts a framework in place for thinking about neuromodulators. But they're all great, and I tried to hit many different contributions (maybe this is the greatest message - that computational neuroscience pervades so many fields, from single-neuron computation to neuromodulators to models of memory).
1. Montague PR, Dayan P, Sejnowski TJ. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci. 1996 Mar 1;16(5):1936-47. Abstract: We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse dopamine systems above and below baseline levels would represent errors in these predictions that are delivered to cortical and subcortical targets. We present a model for how such errors could be constructed in a real brain that is consistent with physiological results for a subset of dopaminergic neurons located in the ventral tegmental area and surrounding dopaminergic neurons. The theory also makes testable predictions about human choice behavior on a simple decision-making task. Furthermore, we show that, through a simple influence on synaptic plasticity, fluctuations in dopamine release can act to change the predictions in an appropriate manner. This paper is the first of a series of papers setting up a framework for how mesencephalic dopamine neurons represent reward and can serve as the basis for temporal difference-based reward learning in which the reward is offered at a delayed time.
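The prediction-error signal in this framework is the temporal-difference (TD) error of reinforcement learning. A minimal tabular TD(0) sketch on a toy trial (all numbers illustrative) shows how the error-driven updates make early time steps come to predict a later reward:

```python
def td_learn(reward_time=5, n_steps=6, n_trials=200, alpha=0.1, gamma=1.0):
    """Tabular TD(0): V[t] learns to predict upcoming reward. The TD
    error delta (the putative dopamine signal) is large at the reward
    early in training and propagates back to predictive time steps."""
    V = [0.0] * (n_steps + 1)
    for _ in range(n_trials):
        for t in range(n_steps):
            r = 1.0 if t == reward_time else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # prediction error
            V[t] += alpha * delta
    return V

V = td_learn()
```

After training, the value estimates at every time step before the reward approach 1, so a reward delivered on schedule no longer generates a prediction error, mirroring the dopamine recordings the paper models.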
2. Strong, S., Koberle, R., de Ruyter van Steveninck, R. and Bialek, W. 1998. Entropy and information in neural spike trains. Physical Review Letters 80: 197-200. Abstract: The nervous system represents time dependent signals in sequences of discrete, identical action potentials or spikes; information is carried only in the spike arrival times. We show how to quantify this information, in bits, free from any assumptions about which features of the spike train or input signal are most important, and we apply this approach to the analysis of experiments on a motion sensitive neuron in the fly visual system. This neuron transmits information about the visual stimulus at rates of up to 90 bits/s, within a factor of 2 of the physical limit set by the entropy of the spike train itself. This paper ushered in a new set of techniques for characterizing spike trains using the methods of information theory, and also illustrated that there was information on much smaller time scales (~a couple of ms) than had typically been assumed previously.
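In its simplest form, the direct method of this paper discretizes a spike train into binary "words" and estimates the entropy of the empirical word distribution. A toy sketch, ignoring the paper's careful extrapolations over word length and sample size:

```python
from collections import Counter
import math
import random

def word_entropy(spikes, word_len=4):
    """Direct entropy estimate (bits per word) of a binary spike train,
    from the empirical distribution of non-overlapping words."""
    words = [tuple(spikes[i:i + word_len])
             for i in range(0, len(spikes) - word_len + 1, word_len)]
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly periodic train carries no word-level entropy;
# a random train has close to word_len bits per word.
regular = [1, 0, 0, 0] * 100
random.seed(0)
noisy = [random.randint(0, 1) for _ in range(400)]
```

The paper's contribution is in separating total entropy from noise entropy across repeated stimuli and extrapolating to infinite data, which this naive estimator does not attempt.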
3a. Abbott LF, Varela JA, Sen K, Nelson SB. Synaptic depression and cortical gain control. Science. 1997 Jan 10;275(5297):220-4. Abstract: Cortical neurons receive synaptic inputs from thousands of afferents that fire action potentials at rates ranging from less than 1 hertz to more than 200 hertz. Both the number of afferents and their large dynamic range can mask changes in the spatial and temporal pattern of synaptic activity, limiting the ability of a cortical neuron to respond to its inputs. Modeling work based on experimental measurements indicates that short-term depression of intracortical synapses provides a dynamic gain-control mechanism that allows equal percentage rate changes on rapidly and slowly firing afferents to produce equal postsynaptic responses. Unlike inhibitory and adaptive mechanisms that reduce responsiveness to all inputs, synaptic depression is input-specific, leading to a dramatic increase in the sensitivity of a neuron to subtle changes in the firing patterns of its afferents.

3b. Markram H, Tsodyks M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature. 1996 Aug 29;382(6594):807-10. Abstract: Experience-dependent potentiation and depression of synaptic strength has been proposed to subserve learning and memory by changing the gain of signals conveyed between neurons. Here we examine synaptic plasticity between individual neocortical layer-5 pyramidal neurons. We show that an increase in the synaptic response, induced by pairing action-potential activity in pre- and postsynaptic neurons, was only observed when synaptic input occurred at low frequencies. This frequency-dependent increase in synaptic responses arises because of a redistribution of the available synaptic efficacy and not because of an increase in the efficacy. Redistribution of synaptic efficacy could represent a mechanism to change the content, rather than the gain, of signals conveyed between neurons. These 2 papers connected short-term synaptic plasticity to important computational implications.
4a. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982 Apr;79(8):2554-8. Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices. This classic paper illustrated the idea of attractor models and a correspondence with energy surfaces. It now permeates discussions of long-term memory storage in networks, especially in the hippocampus. It was followed more recently by the article below, which expanded the idea of attractor models to continuous attractors; this is now the framework for discussion of many networks storing short-term memories (the other set of models being the so-called ring models, but I am not sure of the original reference for those).
4b. Seung HS. How the brain keeps the eyes still. Proc Natl Acad Sci U S A. 1996 Nov 12;93(23):13339-44. Abstract: The brain can hold the eyes still because it stores a memory of eye position. The brain's memory of horizontal eye position appears to be represented by persistent neural activity in a network known as the neural integrator, which is localized in the brainstem and cerebellum. Existing experimental data are reinterpreted as evidence for an "attractor hypothesis" that the persistent patterns of activity observed in this network form an attractive line of fixed points in its state space. Line attractor dynamics can be produced in linear or nonlinear neural networks by learning mechanisms that precisely tune positive feedback.
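The content-addressable memory described in the Hopfield abstract can be demonstrated directly: store patterns with the Hebbian outer-product rule, then let asynchronous updates pull a corrupted cue back to the stored attractor. A minimal sketch with a single tiny pattern:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: W = (1/n) sum_p x_p x_p^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=100, seed=0):
    """Asynchronous dynamics: repeatedly pick a random unit and set it
    to the sign of its local field, descending the energy function."""
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one +/-1 pattern and recover it from a corrupted cue.
pattern = np.array([[1, 1, -1, -1, 1, -1, 1, -1]])
W = train_hopfield(pattern.astype(float))
cue = pattern[0].copy(); cue[0] = -1; cue[3] = 1   # flip two bits
restored = recall(W, cue.astype(float))
```

The flipped bits are corrected because each update moves the state downhill on the energy surface toward the stored memory, the "entire memory from any subpart of sufficient size" of the abstract.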
5a. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000 Sep;3(9):919-26. Abstract: Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. One form of experimentally observed long-term synaptic plasticity, which we call spike-timing-dependent plasticity (STDP), depends on the relative timing of pre- and postsynaptic action potentials. In modeling studies, we find that this form of synaptic modification can automatically balance synaptic strengths to make postsynaptic firing irregular but more sensitive to presynaptic spike timing. It has been argued that neurons in vivo operate in such a balanced regime. Synapses modifiable by STDP compete for control of the timing of postsynaptic action potentials. Inputs that fire the postsynaptic neuron with short latency or that act in correlated groups are able to compete most successfully and develop strong synapses, while synapses of longer-latency or less-effective inputs are weakened.

5b. Song S, Abbott LF. Cortical development and remapping through spike timing-dependent plasticity. Neuron. 2001 Oct 25;32(2):339-50. Abstract: Long-term modification of synaptic efficacy can depend on the timing of pre- and postsynaptic action potentials. In model studies, such spike timing-dependent plasticity (STDP) introduces the desirable features of competition among synapses and regulation of postsynaptic firing characteristics. STDP strengthens synapses that receive correlated input, which can lead to the formation of stimulus-selective columns and the development, refinement, and maintenance of selectivity maps in network models. The temporal asymmetry of STDP suppresses strong destabilizing self-excitatory loops and allows a group of neurons that become selective early in development to direct other neurons to become similarly selective. STDP, acting alone without further hypothetical global constraints or additional forms of plasticity, can also reproduce the remapping seen in adult cortex following afferent lesions. The papers above have been seminal in illustrating the implications for learning of spike-timing-dependent synaptic plasticity.
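The pair-based STDP rule explored in these papers is usually written as an exponential window in the pre-post spike-time difference; a minimal sketch with illustrative amplitudes and time constant:

```python
import math

def stdp_dw(delta_t, A_plus=0.005, A_minus=0.00525, tau=0.020):
    """Pair-based STDP window: potentiation when the presynaptic spike
    precedes the postsynaptic one (delta_t = t_post - t_pre > 0),
    depression otherwise, each decaying with time constant tau."""
    if delta_t > 0:
        return A_plus * math.exp(-delta_t / tau)
    return -A_minus * math.exp(delta_t / tau)

# Pre-before-post strengthens the synapse; post-before-pre weakens it.
ltp = stdp_dw(0.010)    # pre 10 ms before post
ltd = stdp_dw(-0.010)   # post 10 ms before pre
```

Setting the depression amplitude slightly larger than the potentiation amplitude, as here, is the choice that makes uncorrelated inputs depress on average, which drives the competition analysed in the Song et al. papers.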
6. Polsky A, Mel BW, Schiller J. Computational subunits in thin dendrites of pyramidal cells. Nat Neurosci. 2004 Jun;7(6):621-7. Epub 2004 May 23. Abstract: The thin basal and oblique dendrites of cortical pyramidal neurons receive most of the synaptic inputs from other cells, but their integrative properties remain uncertain. Previous studies have most often reported global linear or sublinear summation. An alternative view, supported by biophysical modeling studies, holds that thin dendrites provide a layer of independent computational 'subunits' that sigmoidally modulate their inputs prior to global summation. To distinguish these possibilities, we combined confocal imaging and dual-site focal synaptic stimulation of identified thin dendrites in rat neocortical pyramidal neurons. We found that nearby inputs on the same branch summed sigmoidally, whereas widely separated inputs or inputs to different branches summed linearly. This strong spatial compartmentalization effect is incompatible with a global summation rule and provides the first experimental support for a two-layer 'neural network' model of pyramidal neuron thin-branch integration. Our findings could have important implications for the computing and memory-related functions of cortical tissue. This paper, as well as previous theoretical work, suggests that dendrites might enable single neurons to behave as feedforward neural networks.
7. Medina JF, Nores WL, Mauk MD. Inhibition of climbing fibres is a signal for the extinction of conditioned eyelid responses. Nature. 2002 Mar 21;416(6878):330-3. Abstract: A fundamental tenet of cerebellar learning theories asserts that climbing fibre afferents from the inferior olive provide a teaching signal that promotes the gradual adaptation of movements. Data from several forms of motor learning provide support for this tenet. In pavlovian eyelid conditioning, for example, where a tone is repeatedly paired with a reinforcing unconditioned stimulus like periorbital stimulation, the unconditioned stimulus promotes acquisition of conditioned eyelid responses by activating climbing fibres. Climbing fibre activity elicited by an unconditioned stimulus is inhibited during the expression of conditioned responses - consistent with the inhibitory projection from the cerebellum to inferior olive. Here, we show that inhibition of climbing fibres serves as a teaching signal for extinction, where learning not to respond is signalled by presenting a tone without the unconditioned stimulus. We used reversible infusion of synaptic receptor antagonists to show that blocking inhibitory input to the climbing fibres prevents extinction of the conditioned response, whereas blocking excitatory input induces extinction. These results, combined with analysis of climbing fibre activity in a computer simulation of the cerebellar-olivary system, suggest that transient inhibition of climbing fibres below their background level is the signal that drives extinction. This is one of several computational studies by Mauk and collaborators that are enhancing our knowledge of cerebellar processing (also see similar papers by Raymond & Lisberger applied to the VOR).
8. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996 Jun 13;381(6583):607-9. Abstract: The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs. This now classic study suggests how the statistical structure of natural images may determine the response properties of V1 cells, and set the stage for many later studies discussing the concept of sparse coding of images.
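The trade-off in this study, between reconstruction error and sparseness of the coefficients, can be illustrated with a generic sparse-coding solver (ISTA-style shrinkage, not the authors' original procedure) inferring coefficients for a fixed random dictionary:

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """Minimize ||x - D a||^2 / 2 + lam * ||a||_1 over coefficients a
    by iterative shrinkage-thresholding (ISTA)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrink
    return a

# With an overcomplete dictionary, a signal built from a single atom
# yields a code concentrated on that atom.
rng = np.random.default_rng(1)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms
x = 2.0 * D[:, 3]
a = sparse_code(D, x)
```

In the paper, the same kind of objective is used in the other direction as well: the dictionary itself is learned from natural image patches, and the atoms that emerge are the localized, oriented, bandpass receptive fields described in the abstract.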
9. Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002 Jun 7;296(5574):1829-32. Abstract: Three-dimensional (3D) movement of neuroprosthetic devices can be controlled by the activity of cortical neurons when appropriate algorithms are used to decode intended movement in real time. Previous studies assumed that neurons maintain fixed tuning properties, and the studies used subjects who were unaware of the movements predicted by their recorded units. In this study, subjects had real-time visual feedback of their brain-controlled trajectories. Cell tuning properties changed when used for brain-controlled movements. By using control algorithms that track these changes, subjects made long sequences of 3D movements using far fewer cortical units than expected. Daily practice improved movement accuracy and the directional tuning of these units. This represents some of the seminal work decoding cortical activity to control neural prosthetics.
10. Van Vreeswijk C, Abbott LF, Ermentrout GB. When inhibition not excitation synchronizes neural firing. J Comput Neurosci. 1994 Dec;1(4):313-21. Abstract: Excitatory and inhibitory synaptic coupling can have counter-intuitive effects on the synchronization of neuronal firing. While it might appear that excitatory coupling would lead to synchronization, we show that frequently inhibition rather than excitation synchronizes firing. We study two identical neurons described by integrate-and-fire models, general phase-coupled models or the Hodgkin-Huxley model with mutual, non-instantaneous excitatory or inhibitory synapses between them. We find that if the rise time of the synapse is longer than the duration of an action potential, inhibition not excitation leads to synchronized firing.

If I were to update, I think I would add papers from:
1) Neuroeconomics & reinforcement learning - in addition to the seminal work by Dayan & Schultz (already in the attached list), perhaps the Loewenstein/Seung paper on matching behavior as a generic consequence of correlational learning rules.
2) Bayesian networks - maybe Ma, Beck, Latham, Pouget or others on the idea that the brain may encode & compute with probabilities.
  • #3

As a theoretical neuroscientist, I am biased towards computational models that have predictive power. So, here are some papers that I think have been influential in this regard. This is by no means a comprehensive list and I have tried to include papers from different areas of computational neuroscience.

1. R. Dawkins. The Selfish Gene. Oxford University Press, 1976.
2. D. Marr. Vision. W. H. Freeman, 1982.
3. T. Poggio and S. Amari. Learning and generalization in neural networks. In E. Domany, J. L. van Hemmen, and K. Schulten, editors, Models of Neural Networks II, pages 147-186. Springer-Verlag, 1994.
4. W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. Reading a neural code. Science, 252:1854-1857, 1991.
5. J. J. Hopfield and A. V. M. Herz. Rapid local synchronization of action potentials: Toward computation with coupled integrate-and-fire neurons. Proc. Natl. Acad. Sci. USA, 92:6655-6662, 1995.
6. S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, 1984.
7. N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8:183-208, 2000.
8. A. Zador, C. Koch, and T. Brown. A biophysical model of a Hebbian synapse. Proc. Natl. Acad. Sci. USA, 87:6718-6722, 1990.
9. E. D. Siggia. Hysteresis and the dynamics of neurobiological switches. Journal of Theoretical Biology, 173:195-204, 1995.
10. R. J. Douglas, K. A. C. Martin, and D. Whitteridge. A canonical microcircuit for neocortex. Neural Computation, 1:480-488, 1989.

Related to Key papers in computational neuroscience

1. What is computational neuroscience?

Computational neuroscience is a field that combines neuroscience, computer science, and mathematics to study the brain and its functions. It uses computational models to simulate brain processes and understand how the brain works.

2. What are key papers in computational neuroscience?

Key papers in computational neuroscience are influential and groundbreaking research studies that have significantly advanced our understanding of the brain and its functions. They often introduce new theories, models, and methods that shape the direction of the field.

3. How are key papers in computational neuroscience selected?

Key papers in computational neuroscience are selected based on their impact, significance, and contribution to the field. They are often cited by other researchers and are considered to be major milestones in the progress of computational neuroscience.

4. Can non-scientists understand key papers in computational neuroscience?

While some key papers in computational neuroscience may be difficult for non-scientists to understand, there are many resources available online that provide simplified explanations and interpretations of these papers. Additionally, many key papers have been written in a way that is accessible to a wider audience.

5. How can key papers in computational neuroscience benefit society?

Key papers in computational neuroscience can benefit society in many ways. They can provide insights into brain disorders and diseases, aid in the development of new treatments and therapies, and improve our understanding of human behavior and cognition. They also have the potential to inspire new technologies and innovations in various industries.
