Bio


Kwabena Boahen is a Professor of Bioengineering and of Electrical Engineering at Stanford University, with a courtesy appointment in Computer Science, and an investigator in the Bio-X Institute, the System X Alliance, and the Wu Tsai Neurosciences Institute. He founded the Brains in Silicon Lab at Stanford to link neuronal biophysics to cognitive behavior through computational modeling and to emulate the brain with silicon chips through neuromorphic engineering. His interest in neural networks developed soon after he left his native Ghana to pursue undergraduate studies in Electrical and Computer Engineering at Johns Hopkins University, Baltimore, in 1985. He went on to earn a doctorate in Computation and Neural Systems at the California Institute of Technology in 1997. From 1997 to 2005 he was on the faculty of the University of Pennsylvania, Philadelphia, PA, where he was the inaugural holder of the Skirkanich Term Junior Chair. His research has resulted in over a hundred publications, including a cover story in Scientific American featuring his lab’s work on a silicon retina and a silicon tectum that “wire together” automatically (May 2005). He has been invited to give over a hundred seminar, plenary, and keynote talks, including a 2007 TED talk, “A computer that works like the brain”, with over seven hundred thousand views. He has received several distinguished honors, including a Packard Fellowship for Science and Engineering (1999) and a National Institutes of Health Director’s Pioneer Award (2006). He was elected a fellow of the American Institute for Medical and Biological Engineering (2016) and of the Institute of Electrical and Electronics Engineers (2016) in recognition of his lab’s work on Neurogrid, an iPad-size platform that emulates the cerebral cortex in biophysical detail and at functional scale, a combination that hitherto required a supercomputer. He has led several multi-university, multi-investigator research efforts, including one that raised the level of abstraction at which neuromorphic chips are ‘programmed’ by co-designing hardware and software (Brainstorm Project). A spin-out from his Stanford lab, Femtosense Inc (2018), is commercializing this breakthrough.

Honors & Awards


  • Fellow, American Institute for Medical and Biological Engineering (2016)
  • Fellow, Institute of Electrical and Electronics Engineers (2016)
  • NIH Director's Pioneer Award, National Institutes of Health (2006-2011)
  • NIH Director's Transformative Research Award, National Institutes of Health (2011-2016)
  • Young Investigator Award, Office of Naval Research (2002-2005)
  • Faculty Early Career Award, National Science Foundation (2001-2006)
  • Fellowship in Science and Engineering, Packard Foundation (1999-2004)

Professional Education


  • PhD, Caltech (1997)

Current Research and Scholarly Interests


Boahen's group analyzes neural behavior computationally to elucidate principles of neural design at the cellular, circuit, and systems levels, and synthesizes neuromorphic electronic systems whose energy use scales with size as efficiently as the brain's. This interdisciplinary research program bridges neurobiology and medicine with electronics and computer science, bringing together these seemingly disparate fields.

All Publications


  • Catalyzing next-generation Artificial Intelligence through NeuroAI. Nature communications Zador, A., Escola, S., Richards, B., Olveczky, B., Bengio, Y., Boahen, K., Botvinick, M., Chklovskii, D., Churchland, A., Clopath, C., DiCarlo, J., Ganguli, S., Hawkins, J., Kording, K., Koulakov, A., LeCun, Y., Lillicrap, T., Marblestone, A., Olshausen, B., Pouget, A., Savin, C., Sejnowski, T., Simoncelli, E., Solla, S., Sussillo, D., Tolias, A. S., Tsao, D. 2023; 14 (1): 1597

    Abstract

    Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.

    View details for DOI 10.1038/s41467-023-37180-x

    View details for PubMedID 36949048

  • Dendrocentric learning for synthetic intelligence. Nature Boahen, K. 2022; 612 (7938): 43-50

    Abstract

    Artificial intelligence now advances by performing twice as many floating-point multiplications every two months, but the semiconductor industry tiles twice as many multipliers on a chip every two years. Moreover, the returns from tiling these multipliers ever more densely now diminish because signals must travel relatively farther and farther. Although travel can be shortened by stacking tiled multipliers in a three-dimensional chip, such a solution acutely reduces the available surface area for dissipating heat. Here I propose to transcend this three-dimensional thermal constraint by moving away from learning with synapses to learning with dendrites. Synaptic inputs are not weighted precisely but rather ordered meticulously along a short stretch of dendrite, termed dendrocentric learning. With the help of a computational model of a dendrite and a conceptual model of a ferroelectric device that emulates it, I illustrate how dendrocentric learning artificial intelligence, or synthetic intelligence for short, could run not with megawatts in the cloud but rather with watts on a smartphone.

    View details for DOI 10.1038/s41586-022-05340-6

    View details for PubMedID 36450907

    View details for PubMedCentralID 7211396
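
    The idea of ordering rather than weighting inputs can be made concrete with a toy sketch. The Python snippet below treats a dendrite as a stored ordering of input lines that responds only when spikes arrive in that order within a time window; it is an illustration of the ordering principle only (the function name and parameters are hypothetical), not the paper's dendrite biophysics or its ferroelectric-device model.

        # Toy ordering-based ("dendrocentric") sequence detector.
        # stored_order: the input lines in the order they sit along the dendrite.
        # spikes: time-sorted (time, input_line) events; window: detection window (ms).
        def dendrite_responds(stored_order, spikes, window):
            if not spikes:
                return False
            t0 = spikes[0][0]
            arrivals = [line for t, line in spikes
                        if line in stored_order and t <= t0 + window]
            return arrivals == list(stored_order)

        # A dendrite tuned to inputs 3 -> 7 -> 1 arriving within 5 ms:
        print(dendrite_responds((3, 7, 1), [(0.0, 3), (1.2, 7), (2.5, 1)], 5.0))  # True
        print(dendrite_responds((3, 7, 1), [(0.0, 7), (1.2, 3), (2.5, 1)], 5.0))  # False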

  • Cortical state dynamics and selective attention define the spatial pattern of correlated variability in neocortex. Nature communications Shi, Y., Steinmetz, N. A., Moore, T., Boahen, K., Engel, T. A. 2022; 13 (1): 44

    Abstract

    Correlated activity fluctuations in the neocortex influence sensory responses and behavior. Neural correlations reflect anatomical connectivity but also change dynamically with cognitive states such as attention. Yet, the network mechanisms defining the population structure of correlations remain unknown. We measured correlations within columns in the visual cortex. We show that the magnitude of correlations, their attentional modulation, and dependence on lateral distance are explained by columnar On-Off dynamics, which are synchronous activity fluctuations reflecting cortical state. We developed a network model in which the On-Off dynamics propagate across nearby columns generating spatial correlations with the extent controlled by attentional inputs. This mechanism, unlike previous proposals, predicts spatially non-uniform changes in correlations during attention. We confirm this prediction in our columnar recordings by showing that in superficial layers the largest changes in correlations occur at intermediate lateral distances. Our results reveal how spatially structured patterns of correlated variability emerge through interactions of cortical state dynamics, anatomical connectivity, and attention.

    View details for DOI 10.1038/s41467-021-27724-4

    View details for PubMedID 35013259
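
    A minimal simulation conveys the abstract's core mechanism: a shared On-Off process modulating firing rates produces correlated spike-count variability. The sketch below assumes a two-state Markov (telegraph) process with hypothetical rates and switching probability; it is not the paper's columnar network model.

        import numpy as np

        rng = np.random.default_rng(0)
        n_bins, dt = 20000, 0.02          # 20 ms counting bins
        p_switch = 0.02                   # per-bin probability of switching On/Off
        rate_on, rate_off = 40.0, 5.0     # firing rate (spikes/s) in each state

        # Shared On-Off (telegraph) state driving both units
        state = np.zeros(n_bins, dtype=int)
        for t in range(1, n_bins):
            state[t] = state[t - 1] ^ int(rng.random() < p_switch)

        rates = np.where(state == 1, rate_on, rate_off) * dt   # expected counts per bin
        counts_a = rng.poisson(rates)
        counts_b = rng.poisson(rates)                          # second unit shares the state

        print("spike-count correlation:", np.corrcoef(counts_a, counts_b)[0, 1])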

  • Braindrop: A Mixed-Signal Neuromorphic Architecture With a Dynamical Systems-Based Programming Model PROCEEDINGS OF THE IEEE Neckar, A., Fok, S., Benjamin, B., Stewart, T. C., Oza, N. N., Voelker, A. R., Eliasmith, C., Manohar, R., Boahen, K. 2019; 107 (1): 144-164
  • A Neuromorph's Prospectus COMPUTING IN SCIENCE & ENGINEERING Boahen, K. 2017; 19 (2): 14-15
  • Selective modulation of cortical state during spatial attention SCIENCE Engel, T. A., Steinmetz, N. A., Gieselmann, M. A., Thiele, A., Moore, T., Boahen, K. 2016; 354 (6316): 1140-1144

    Abstract

    Neocortical activity is permeated with endogenously generated fluctuations, but how these dynamics affect goal-directed behavior remains a mystery. We found that ensemble neural activity in primate visual cortex spontaneously fluctuated between phases of vigorous (On) and faint (Off) spiking synchronously across cortical layers. These On-Off dynamics, reflecting global changes in cortical state, were also modulated at a local scale during selective attention. Moreover, the momentary phase of local ensemble activity predicted behavioral performance. Our results show that cortical state is controlled locally within a cortical map according to cognitive demands and reveal the impact of these local changes in cortical state on goal-directed behavior.

    View details for DOI 10.1126/science.aag1420

    View details for Web of Science ID 000388916400040

    View details for PubMedID 27934763

  • Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations PROCEEDINGS OF THE IEEE Benjamin, B. V., Gao, P., Mcquinn, E., Choudhary, S., Chandrasekaran, A. R., Bussat, J., Alvarez-Icaza, R., Arthur, J. V., Merolla, P. A., Boahen, K. 2014; 102 (5): 699-716
  • Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces. Journal of neural engineering Dethier, J., Nuyujukian, P., Ryu, S. I., Shenoy, K. V., Boahen, K. 2013; 10 (3): 036008-?

    Abstract

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.

    View details for DOI 10.1088/1741-2560/10/3/036008

    View details for PubMedID 23574919
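
    For readers unfamiliar with the decoder being mapped onto the SNN, the sketch below is a minimal, conventional Kalman-filter velocity decoder in Python/NumPy. The state is 2-D cursor velocity, the observation is a vector of binned firing rates, and every matrix here is a hypothetical placeholder rather than a value fitted in the study; the paper's contribution is implementing this type of filter with spiking neurons via the Neural Engineering Framework, which this code does not do.

        import numpy as np

        def kalman_decode(rates, A, W, C, Q):
            """rates: (T, n_units) binned firing rates -> (T, 2) velocity estimates."""
            n_state = A.shape[0]
            x = np.zeros(n_state)                 # velocity estimate
            P = np.eye(n_state)                   # estimate covariance
            out = []
            for y in rates:
                x = A @ x                         # predict: x_t = A x_{t-1} + noise (cov W)
                P = A @ P @ A.T + W
                K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)   # gain for y_t = C x_t + noise (cov Q)
                x = x + K @ (y - C @ x)           # update with the observed rates
                P = (np.eye(n_state) - K @ C) @ P
                out.append(x.copy())
            return np.asarray(out)

        # Hypothetical usage with placeholder matrices (96 recording channels):
        rng = np.random.default_rng(0)
        A, W = 0.95 * np.eye(2), 0.01 * np.eye(2)
        C, Q = rng.normal(size=(96, 2)), np.eye(96)
        velocities = kalman_decode(rng.normal(size=(100, 96)), A, W, C, Q)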

  • Dynamical System Guided Mapping of Quantitative Neuronal Models Onto Neuromorphic Hardware IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS Gao, P., Benjamin, B. V., Boahen, K. 2012; 59 (10): 2383-2394
  • Silicon-Neuron Design: A Dynamical Systems Approach. IEEE Transactions on Circuits and Systems I: Regular Papers Arthur, J. V., Boahen, K. 2011; 58 (5): 1034-1043

    Abstract

    We present an approach to design spiking silicon neurons based on dynamical systems theory. Dynamical systems theory aids in choosing the appropriate level of abstraction, prescribing a neuron model with the desired dynamics while maintaining simplicity. Further, we provide a procedure to transform the prescribed equations into subthreshold current-mode circuits. We present a circuit design example, a positive-feedback integrate-and-fire neuron, fabricated in 0.25 μm CMOS. We analyze and characterize the circuit, and demonstrate that it can be configured to exhibit desired behaviors, including spike-frequency adaptation and two forms of bursting.

    View details for DOI 10.1109/TCSI.2010.2089556

    View details for PubMedID 21617741

    View details for PubMedCentralID PMC3100558
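
    The design flow starts from a simple dynamical-systems neuron model and only then maps it to subthreshold circuits. The sketch below simulates one such model in Python: an integrate-and-fire neuron with quadratic positive feedback and an adaptation variable (Izhikevich-style parameters, chosen here as hypothetical illustration values), showing the spike-frequency adaptation behavior the circuit can be configured to exhibit. It is a behavioral stand-in, not the current-mode circuit itself.

        import numpy as np

        dt, T = 0.1, 500.0                 # time step and duration (ms)
        v, u = -65.0, 0.0                  # membrane potential (mV), adaptation variable
        a, b, c, d = 0.02, 0.2, -65.0, 8.0 # recovery rate, sensitivity, reset, adaptation jump
        I = 10.0                           # constant input current
        spike_times = []
        for step in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5 * v + 140 - u + I)   # quadratic positive feedback in v
            u += dt * a * (b * v - u)                        # slow adaptation variable
            if v >= 30.0:                                    # spike: reset and increment adaptation
                spike_times.append(step * dt)
                v, u = c, u + d
        isis = np.diff(spike_times)
        print("first vs last interspike interval (ms):", isis[0], isis[-1])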

  • Synchrony in silicon: The gamma rhythm IEEE TRANSACTIONS ON NEURAL NETWORKS Arthur, J. V., Boahen, K. A. 2007; 18 (6): 1815-1825

    Abstract

    In this paper, we present a network of silicon interneurons that synchronize in the gamma frequency range (20-80 Hz). The gamma rhythm strongly influences neuronal spike timing within many brain regions, potentially playing a crucial role in computation. Yet it has largely been ignored in neuromorphic systems, which use mixed analog and digital circuits to model neurobiology in silicon. Our neurons synchronize by using shunting inhibition (conductance based) with a synaptic rise time. Synaptic rise time promotes synchrony by delaying the effect of inhibition, providing an opportune period for interneurons to spike together. Shunting inhibition, through its voltage dependence, inhibits interneurons that spike out of phase more strongly (delaying the spike further), pushing them into phase (in the next cycle). We characterize the interneuron, which consists of soma (cell body) and synapse circuits, fabricated in a 0.25-μm complementary metal-oxide-semiconductor (CMOS). Further, we show that synchronized interneurons (population of 256) spike with a period that is proportional to the synaptic rise time. We use these interneurons to entrain model excitatory principal neurons and to implement a form of object binding.

    View details for DOI 10.1109/TNN.2007.900238

    View details for Web of Science ID 000250789100019

    View details for PubMedID 18051195
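
    The two ingredients the abstract credits for synchrony can be written down compactly: a synaptic conductance with a finite rise time (difference of exponentials) and a shunting, conductance-based inhibitory current whose strength grows with the neuron's distance from the inhibitory reversal potential. The Python sketch below defines just these two pieces with hypothetical time constants and potentials; it does not reproduce the 256-interneuron chip or its network dynamics.

        import numpy as np

        tau_rise, tau_decay = 1.0, 5.0     # synaptic rise and decay time constants (ms)
        E_inh, E_leak = -65.0, -65.0       # shunting: inhibitory reversal near rest (mV)

        def g_syn(t_since_spike, g_max=1.0):
            """Difference-of-exponentials conductance after a presynaptic spike."""
            t = np.maximum(np.asarray(t_since_spike, dtype=float), 0.0)
            return g_max * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

        def shunting_current(V, t_since_spike):
            """Inhibition is strongest for neurons depolarized away from E_inh (out of phase)."""
            return g_syn(t_since_spike) * (E_inh - V)

        print(shunting_current(-65.0, 2.0))  # at rest: pure shunt, no net current
        print(shunting_current(-55.0, 2.0))  # depolarized (out of phase): strong hyperpolarizing current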

  • Expandable networks for neuromorphic chips (vol 54, pg 301, 2007) IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS Merolla, P. A., Arthur, J. V., Shi, B. E., Boahen, K. A. 2007; 54 (4): 925-926
  • Thermodynamically equivalent silicon models of voltage-dependent ion channels NEURAL COMPUTATION Hynna, K. M., Boahen, K. 2007; 19 (2): 327-350

    Abstract

    We model ion channels in silicon by exploiting similarities between the thermodynamic principles that govern ion channels and those that govern transistors. Using just eight transistors, we replicate, for the first time in silicon, the sigmoidal voltage dependence of activation (or inactivation) and the bell-shaped voltage dependence of its time constant. We derive equations describing the dynamics of our silicon analog and explore its flexibility by varying various parameters. In addition, we validate the design by implementing a channel with a single activation variable. The design's compactness allows tens of thousands of copies to be built on a single chip, facilitating the study of biologically realistic models of neural computation at the network level in silicon.

    View details for Web of Science ID 000243524000002

    View details for PubMedID 17206867
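
    The thermodynamic picture in the abstract reduces to a two-state channel with Boltzmann-form opening and closing rates, which simultaneously yields the sigmoidal steady-state activation and the bell-shaped voltage dependence of the time constant. The NumPy sketch below evaluates those two curves; the half-activation voltage, slope, and base rate are illustrative values, not parameters from the chip.

        import numpy as np

        V = np.linspace(-100.0, 0.0, 201)   # membrane voltage (mV)
        V_half, k, r0 = -50.0, 8.0, 1.0     # half-activation (mV), slope (mV), base rate (1/ms)

        alpha = r0 * np.exp((V - V_half) / (2 * k))    # opening rate (Boltzmann form)
        beta = r0 * np.exp(-(V - V_half) / (2 * k))    # closing rate (Boltzmann form)

        x_inf = alpha / (alpha + beta)   # = 1 / (1 + exp(-(V - V_half) / k)): sigmoidal activation
        tau = 1.0 / (alpha + beta)       # = 1 / (2 r0 cosh((V - V_half) / (2 k))): bell-shaped

        print("tau peaks at V =", V[np.argmax(tau)], "mV with tau =", tau.max(), "ms")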

  • A silicon retina that reproduces signals in the optic nerve JOURNAL OF NEURAL ENGINEERING Zaghloul, K. A., Boahen, K. 2006; 3 (4): 257-267

    Abstract

    Prosthetic devices may someday be used to treat lesions of the central nervous system. Similar to neural circuits, these prosthetic devices should adapt their properties over time, independent of external control. Here we describe an artificial retina, constructed in silicon using single-transistor synaptic primitives, with two forms of locally controlled adaptation: luminance adaptation and contrast gain control. Both forms of adaptation rely on local modulation of synaptic strength, thus meeting the criteria of internal control. Our device is the first to reproduce the responses of the four major ganglion cell types that drive visual cortex, producing 3600 spiking outputs in total. We demonstrate how the responses of our device's ganglion cells compare to those measured from the mammalian retina. Replicating the retina's synaptic organization in our chip made it possible to perform these computations using a hundred times less energy than a microprocessor, and to match the mammalian retina in size and weight. With this level of efficiency and autonomy, it is now possible to develop fully implantable intraocular prostheses.

    View details for DOI 10.1088/1741-2560/3/4/002

    View details for Web of Science ID 000243122900004

    View details for PubMedID 17124329
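
    As a rough software analogue of the two adaptations named above, the sketch below applies per-pixel temporal operations: luminance adaptation as normalization by a running mean, and contrast gain control as division by a running contrast estimate. The constants and the update rule are hypothetical, and this generic signal-processing toy is not the chip's single-transistor synaptic circuitry.

        import numpy as np

        def adapt(frame, state, alpha=0.05, eps=1e-6):
            """frame and the arrays in state share one shape (one value per pixel)."""
            state["mean"] = (1 - alpha) * state["mean"] + alpha * frame        # local luminance estimate
            signal = (frame - state["mean"]) / (state["mean"] + eps)           # luminance adaptation
            state["var"] = (1 - alpha) * state["var"] + alpha * signal ** 2    # local contrast estimate
            return signal / (1.0 + np.sqrt(state["var"]))                      # contrast gain control

        # Hypothetical usage on a stream of 8x8 frames:
        state = {"mean": np.full((8, 8), 0.5), "var": np.zeros((8, 8))}
        for frame in np.random.default_rng(0).uniform(0.0, 1.0, size=(100, 8, 8)):
            out = adapt(frame, state)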

  • A Low Thermal Sensitivity Subthreshold-Current to Pulse-Frequency Converter for Neuromorphic Chips IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS Benjamin, B., Smith, R. L., Boahen, K. A. 2023; 13 (4): 956-964
  • An Analytical MOS Device Model With Mismatch and Temperature Variation for Subthreshold Circuits IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS Benjamin, B., Smith, R. L., Boahen, K. A. 2023; 70 (6): 1826-1830
  • Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS computational biology Timcheck, J., Kadmon, J., Boahen, K., Ganguli, S. 2022; 18 (10): e1010593

    Abstract

    Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly-coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power-laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the build-up of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.

    View details for DOI 10.1371/journal.pcbi.1010593

    View details for PubMedID 36251693
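
    The classical baseline quoted in the abstract is easy to verify numerically: averaging the outputs of N independent noisy units shrinks the error like 1/sqrt(N), which is what tightly balanced spiking networks are meant to beat (scaling like 1/N) once delays and noise are handled. The short NumPy check below uses arbitrary Gaussian noise and trial counts.

        import numpy as np

        rng = np.random.default_rng(1)
        signal = 1.0
        for N in (10, 100, 1000, 10000):
            trials = signal + rng.normal(0.0, 1.0, size=(5000, N))   # N noisy units, 5000 trials
            err = np.abs(trials.mean(axis=1) - signal).mean()        # error of the population average
            print(f"N={N:6d}  mean |error| = {err:.4f}  (compare 1/sqrt(N) = {N ** -0.5:.4f})")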

  • PinT: Polynomial in Temperature Decode Weights in a Neuromorphic Architecture Reid, S., Montoya, A., Boahen, K. IEEE. 2019: 60–65
  • Optimizing an Analog Neuron Circuit Design for Nonlinear Function Approximation Neckar, A., Stewart, T. C., Benjamin, B. V., Boahen, K. IEEE. 2018
  • Live Demonstration: Optimizing an Analog Neuron Circuit Design for Nonlinear Function Approximation Neckar, A., Stewart, T., Benjamin, B., Boahen, K. IEEE. 2018
  • A Population-Level Approach to Temperature Robustness in Neuromorphic Systems Kauderer-Abrams, E., Gilbert, A., Voelker, A., Benjamin, B., Stewart, T. C., Boahen, K. IEEE. 2017: 2723–26
  • Stochastic and Adversarial Online Learning without Hyperparameters Cutkosky, A., Boahen, K. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2017
  • Extending the Neural Engineering Framework for Nonideal Silicon Synapses Voelker, A. R., Benjamin, B. V., Stewart, T. C., Boahen, K., Eliasmith, C. IEEE. 2017: 2086–89
  • Calibrating Silicon-Synapse Dynamics using Time-Encoding and Decoding Machines Kauderer-Abrams, E., Boahen, K. IEEE. 2017: 2525–28
  • A Multicast Tree Router for Multichip Neuromorphic Systems IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS Merolla, P., Arthur, J., Alvarez, R., Bussat, J., Boahen, K. 2014; 61 (3): 820-833
  • Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem JOURNAL OF NEUROPHYSIOLOGY Wittig, J. H., Boahen, K. 2013; 110 (2): 307-321

    Abstract

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability.

    View details for DOI 10.1152/jn.00433.2012

    View details for Web of Science ID 000321843800004

    View details for PubMedID 23554436

    View details for PubMedCentralID PMC3727074
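
    The static half of the mechanism described above has a one-line quantitative form: adding a low-voltage-activated potassium conductance gKL in parallel with the leak shortens the resting membrane time constant, tau_m = C_m / (g_leak + gKL). The values below are hypothetical, and the dynamic suppression of late synaptic inputs is not captured by this calculation.

        # Membrane time constant with and without gKL (pF / nS gives ms)
        C_m, g_leak, g_KL = 100.0, 5.0, 15.0
        print("tau_m without gKL:", C_m / g_leak, "ms")            # 20 ms
        print("tau_m with gKL:   ", C_m / (g_leak + g_KL), "ms")   # 5 ms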

  • Inferior olive mirrors joint dynamics to implement an inverse controller BIOLOGICAL CYBERNETICS Alvarez-Icaza, R., Boahen, K. 2012; 106 (8-9): 429-439

    Abstract

    To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.

    View details for DOI 10.1007/s00422-012-0498-2

    View details for Web of Science ID 000309222000001

    View details for PubMedID 22890817
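
    A generic control-theory sketch of the mapping described above, assuming the standard second-order underdamped model of a joint with natural frequency \omega_n and damping ratio \zeta (illustrative symbols, not values fitted in the paper):

        G(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2},
        \qquad
        C(s) = G(s)^{-1} = \frac{s^2 + 2\zeta\omega_n s + \omega_n^2}{\omega_n^2}

    Inverting G(s) gives the feedforward controller C(s) that cancels the joint's dynamics, and an underdamped joint of this form resonates at the damped natural frequency \omega_d = \omega_n \sqrt{1 - \zeta^2}, the kind of oscillatory dynamics that the abstract proposes inferior olivary subthreshold oscillations mirror.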

  • A Superposable Silicon Synapse with Programmable Reversal Potential 34th Annual International Conference of the IEEE Engineering-in-Medicine-and-Biology-Society (EMBS) Benjamin, B. V., Arthur, J. V., Gao, P., Merolla, P., Boahen, K. IEEE. 2012: 771–774

    Abstract

    We present a novel log-domain silicon synapse designed for subthreshold analog operation that emulates common synaptic interactions found in biology. Our circuit models the dynamic gating of ion-channel conductances by emulating the processes of neurotransmitter release-reuptake and receptor binding-unbinding in a superposable fashion: Only a single circuit is required to model the entire population of synapses (of a given type) that a biological neuron receives. Unlike previous designs, which are strictly excitatory or inhibitory, our silicon synapse implements, for the first time in the log domain, a programmable reversal potential (i.e., driving force). To demonstrate our design's scalability, we fabricated in 180-nm CMOS an array of 64K silicon neurons, each with four independent superposable synapse circuits occupying 11.0 × 21.5 µm² apiece. After verifying that these synapses have the predicted effect on the neurons' spike rate, we explored a recurrent network where the synapses' reversal potentials are set near the neurons' threshold, acting as shunts. These shunting synapses synchronized neuronal spiking more robustly than nonshunting synapses, confirming that reversal potentials can have important network-level implications.

    View details for PubMedID 23366006
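
    The two properties highlighted above, superposition and a programmable reversal potential, can be illustrated in a few lines of Python: one first-order conductance, driven by the summed spike train of all same-type afferents, stands in for an entire synapse population, and the reversal potential E_rev sets the driving force. Time constants and potentials are hypothetical; the paper realizes this with a log-domain subthreshold circuit, not software.

        import numpy as np

        def synapse_population(spike_counts, V, E_rev, tau=5.0, dt=0.1, g_inc=0.1):
            """spike_counts[t]: total presynaptic spikes (all afferents) in bin t; V[t]: membrane voltage."""
            g, currents = 0.0, []
            for n_spikes, v in zip(spike_counts, V):
                g += g_inc * n_spikes              # superposed increments from every afferent
                g -= dt * g / tau                  # single first-order decay for the whole population
                currents.append(g * (E_rev - v))   # driving force set by the programmable E_rev
            return np.asarray(currents)

        # Hypothetical usage: the same drive acts as excitation or as a pure shunt
        rng = np.random.default_rng(0)
        spikes = rng.poisson(0.3, size=500)
        V = np.full(500, -60.0)
        I_exc = synapse_population(spikes, V, E_rev=0.0)      # excitatory driving force
        I_shunt = synapse_population(spikes, V, E_rev=-60.0)  # reversal at V: shunting, zero net current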

  • Deep cerebellar neurons mirror the spinal cord's gain to implement an inverse controller BIOLOGICAL CYBERNETICS Alvarez-Icaza, R., Boahen, K. 2011; 105 (1): 29-40

    Abstract

    Smooth and coordinated motion requires precisely timed muscle activation patterns, which, due to biophysical limitations, must be predictive and executed in a feed-forward manner. In a previous study, we tested Kawato's original proposition, that the cerebellum implements an inverse controller, by mapping a multizonal microcomplex's (MZMC) biophysics to a joint's inverse transfer function and showing that inferior olivary neurons may use their intrinsic oscillations to mirror a joint's oscillatory dynamics. Here, to continue to validate our mapping, we propose that climbing fiber input into the deep cerebellar nucleus (DCN) triggers rebounds, primed by Purkinje cell inhibition, implementing gain on IO's signal to mirror the spinal cord reflex's gain thereby achieving inverse control. We used biophysical modeling to show that Purkinje cell inhibition and climbing fiber excitation interact in a multiplicative fashion to set DCN's rebound strength, where the former primes the cell for rebound by deinactivating its T-type Ca2+ channels and the latter triggers the channels by rapidly depolarizing the cell. We combined this result with our control theory mapping to predict how experimentally injecting current into DCN will affect overall motor output performance, and found that injecting current will proportionally scale the output and unmask the joint's natural response as observed by motor output ringing at the joint's natural frequency. Experimental verification of this prediction will lend support to an MZMC as a joint's inverse controller and the role we assigned underlying biophysical principles that enable it.

    View details for DOI 10.1007/s00422-011-0448-4

    View details for Web of Science ID 000295739400003

    View details for PubMedID 21789607

  • Space coding by gamma oscillations in the barn owl optic tectum JOURNAL OF NEUROPHYSIOLOGY Sridharan, D., Boahen, K., Knudsen, E. I. 2011; 105 (5): 2005-2017

    Abstract

    Gamma-band (25-140 Hz) oscillations of the local field potential (LFP) are evoked by sensory stimuli in the mammalian forebrain and may be strongly modulated in amplitude when animals attend to these stimuli. The optic tectum (OT) is a midbrain structure known to contribute to multimodal sensory processing, gaze control, and attention. We found that presentation of spatially localized stimuli, either visual or auditory, evoked robust gamma oscillations with distinctive properties in the superficial (visual) layers and in the deep (multimodal) layers of the owl's OT. Across layers, gamma power was tuned sharply for stimulus location and represented space topographically. In the superficial layers, induced LFP power peaked strongly in the low-gamma band (25-90 Hz) and increased gradually with visual contrast across a wide range of contrasts. Spikes recorded in these layers included presumptive axonal (input) spikes that encoded stimulus properties nearly identically with gamma oscillations and were tightly phase locked with the oscillations, suggesting that they contribute to the LFP oscillations. In the deep layers, induced LFP power was distributed across the low and high (90-140 Hz) gamma-bands and tended to reach its maximum value at relatively low visual contrasts. In these layers, gamma power was more sharply tuned for stimulus location, on average, than were somatic spike rates, and somatic spikes synchronized with gamma oscillations. Such gamma synchronized discharges of deep-layer neurons could provide a high-resolution temporal code for signaling the location of salient sensory stimuli.

    View details for DOI 10.1152/jn.00965.2010

    View details for Web of Science ID 000290710300006

    View details for PubMedID 21325681

    View details for PubMedCentralID PMC3094170

  • A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm. Advances in neural information processing systems Dethier, J., Nuyujukian, P., Eliasmith, C., Stewart, T., Elassaad, S. A., Shenoy, K. V., Boahen, K. 2011; 2011: 2213-2221

    Abstract

    Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter-based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped onto the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.

    View details for PubMedID 25309106

  • Spiking Neural Network Decoder for Brain-Machine Interfaces. International IEEE/EMBS Conference on Neural Engineering Dethier, J., Gilja, V., Nuyujukian, P., Elassaad, S. A., Shenoy, K. V., Boahen, K. 2011

    Abstract

    We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm developed to predict the arm's velocity onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27%, and ran in real time on a 3 GHz PC. These results demonstrate that a SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (Kalman filter), and achieve similar results with just a few thousand neurons. Hardware SNN implementations-neuromorphic chips-may offer power savings, essential for realizing fully-implantable cortically controlled prostheses.

    View details for PubMedID 24352611

  • Neuromorphic silicon neuron circuits FRONTIERS IN NEUROSCIENCE Indiveri, G., Linares-Barranco, B., Hamilton, T. J., van Schaik, A., Etienne-Cummings, R., Delbruck, T., Liu, S., Dudek, P., Hafliger, P., Renaud, S., Schemmel, J., Cauwenberghs, G., Arthur, J., Hynna, K., Folowosele, F., Saighi, S., Serrano-Gotarredona, T., Wijekoon, J., Wang, Y., Boahen, K. 2011; 5
  • Spiking Neural Network Decoder for Brain-Machine Interfaces 5th International IEEE Engineering-in-Medicine-and-Biology-Society (EMBS) Conference on Neural Engineering (NER) Dethier, J., Gilja, V., Nuyujukian, P., Elassaad, S. A., Shenoy, K. V., Boahen, K. IEEE. 2011: 396–399
  • A 1-change-in-4 Delay-Insensitive Interchip Link International Symposium on Circuits and Systems Nano-Bio Circuit Fabrics and Systems (ISCAS 2010) Chandrasekaran, A., Boahen, K. IEEE. 2010: 3216–3219
  • A Silicon Cochlea With Active Coupling IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS Wen, B., Boahen, K. 2009; 3 (6): 444-455

    Abstract

    We present a mixed-signal very-large-scale-integrated chip that emulates nonlinear active cochlear signal processing. Modeling the cochlea's micromechanics, including outer hair cell (OHC) electromotility, this silicon (Si) cochlea features active coupling between neighboring basilar membrane (BM) segments, a first. Neighboring BM segments, each implemented as a class AB log-domain second-order section, exchange currents representing OHC forces. This novel active-coupling architecture overcomes the major shortcomings of existing cascade and parallel filter-bank architectures, while achieving the highest number of digital outputs in an Si cochlea to date. An active-coupling architecture Si cochlea with 360 frequency channels and 2160 pulse-stream outputs occupies 10.9 mm² in a five-metal 1-poly 0.25-μm CMOS process. The chip's responses resemble those of a living cochlea: Frequency responses become larger and more sharply tuned when active coupling is turned on. For instance, gain increases by 18 dB and Q10 increases from 0.45 to 1.14. This enhancement decreases with increasing input intensity, realizing frequency-selective automatic gain control. Further work is required to improve performance by reducing large variations from tap to tap.

    View details for DOI 10.1109/TBCAS.2009.2027127

    View details for Web of Science ID 000274195300012

    View details for PubMedID 23853292

  • Nonlinear Influence of T-Channels in an in silico Relay Neuron IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING Hynna, K. M., Boahen, K. A. 2009; 56 (6): 1734-1743

    Abstract

    Thalamic relay cells express distinctive response modes based on the state of a low-threshold calcium channel (T-channel). When the channel is fully active (burst mode), the cell responds to inputs with a high-frequency burst of spikes; with the channel inactive (tonic mode), the cell responds at a rate proportional to the input. Due to the T-channel's dynamics, we expect the cell's response to become more nonlinear as the channel becomes more active. To test this hypothesis, we study the response of an in silico relay cell to Poisson spike trains. We first validate our model cell by comparing its responses with in vitro responses. To characterize the model cell's nonlinearity, we calculate Poisson kernels, an approach akin to white noise analysis but using the randomness of Poisson input spikes instead of Gaussian white noise. We find that a relay cell with active T-channels requires at least a third-order system to achieve a characterization as good as a second-order system for a relay cell without T-channels.

    View details for DOI 10.1109/TBME.2009.2015579

    View details for Web of Science ID 000266990400017

    View details for PubMedID 19527951

  • A Delay-Insensitive Address-Event Link 15th IEEE International Symposium on Asynchronous Circuits and Systems Lin, J., Boahen, K. IEEE. 2009: 50–57
  • Neurotech for neuroscience: Unifying concepts, organizing principles, and emerging tools JOURNAL OF NEUROSCIENCE Silver, R., Boahen, K., Grillner, S., Kopell, N., Olsen, K. L. 2007; 27 (44): 11807-11819

    Abstract

    The ability to tackle analysis of the brain at multiple levels simultaneously is emerging from rapid methodological developments. The classical research strategies of "measure," "model," and "make" are being applied to the exploration of nervous system function. These include novel conceptual and theoretical approaches, creative use of mathematical modeling, and attempts to build brain-like devices and systems, as well as other developments including instrumentation and statistical modeling (not covered here). Increasingly, these efforts require teams of scientists from a variety of traditional scientific disciplines to work together. The potential of such efforts for understanding directed motor movement, emergence of cognitive function from neuronal activity, and development of neuromimetic computers are described by a team that includes individuals experienced in behavior and neuroscience, mathematics, and engineering. Funding agencies, including the National Science Foundation, explore the potential of these changing frontiers of research for developing research policies and long-term planning.

    View details for DOI 10.1523/JNEUROSCI.3575-07.2007

    View details for Web of Science ID 000250577600006

    View details for PubMedID 17978017

    View details for PubMedCentralID PMC3275424

  • Silicon neurons that burst when primed IEEE International Symposium on Circuits and Systems Hynna, K. M., Boahen, K. IEEE. 2007: 3363–3366
  • Silicon neurons that inhibit to synchronize IEEE International Symposium on Circuits and Systems Arthur, J. V., Boahen, K. IEEE. 2007: 1186–1186
  • Silicon neurons that inhibit to synchronize 2006 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOLS 1-11, PROCEEDINGS Arthur, J. V., Boahen, K. 2006: 4807-?