Bio


Frank Willett is co-director of the Neural Prosthetics Translational Laboratory. His group develops brain-computer interfaces (BCIs) to restore movement and communication to people with neurological disorders. Recent contributions include handwriting- and speech-based BCIs that set new records for communication speed and accuracy in people with paralysis. More broadly, the group is interested in computational approaches to understanding brain function and neural recordings, with a focus on how the human brain represents movement and language.

Honors & Awards


  • Annual BCI Award - 2nd Place, BCI Award Foundation (2022)
  • Early Career Award, BCI Society (2021)
  • Annual BCI Award - 1st Place, BCI Award Foundation (2020)
  • Annual BCI Award - 1st Place, BCI Award Foundation (2018)
  • Graduate Research Fellowship Program Awardee, National Science Foundation (2013-2016)

All Publications


  • A high-performance brain-computer interface for finger decoding and quadcopter game control in an individual with paralysis. Nature medicine Willsey, M. S., Shah, N. P., Avansino, D. T., Hahn, N. V., Jamiolkowski, R. M., Kamdar, F. B., Hochberg, L. R., Willett, F. R., Henderson, J. M. 2025

    Abstract

    People with paralysis express unmet needs for peer support, leisure activities and sporting activities. Many within the general population rely on social media and massively multiplayer video games to address these needs. We developed a high-performance, finger-based brain-computer-interface system allowing continuous control of three independent finger groups, of which the thumb can be controlled in two dimensions, yielding a total of four degrees of freedom. The system was tested in a human research participant with tetraplegia due to spinal cord injury over sequential trials requiring fingers to reach and hold on targets, with an average acquisition rate of 76 targets per minute and completion time of 1.58 ± 0.06 seconds, comparing favorably to prior animal studies despite a twofold increase in the decoded degrees of freedom. More importantly, finger positions were then used to control a virtual quadcopter (the number-one restorative priority for the participant) using a brain-to-finger-to-computer interface to allow dexterous navigation around fixed- and random-ringed obstacle courses. The participant expressed or demonstrated a sense of enablement, recreation and social connectedness that addresses many of the unmet needs of people with paralysis.

    View details for DOI 10.1038/s41591-024-03341-8

    View details for PubMedID 39833405

  • An Accurate and Rapidly Calibrating Speech Neuroprosthesis. The New England journal of medicine Card, N. S., Wairagkar, M., Iacobacci, C., Hou, X., Singer-Clark, T., Willett, F. R., Kunz, E. M., Fan, C., Vahdati Nia, M., Deo, D. R., Srinivasan, A., Choi, E. Y., Glasser, M. F., Hochberg, L. R., Henderson, J. M., Shahlaie, K., Stavisky, S. D., Brandman, D. M. 2024; 391 (7): 609-618

    Abstract

    Brain-computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. Communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy. A 45-year-old man with amyotrophic lateral sclerosis (ALS) with tetraparesis and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes. We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice. On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours. In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training. (Funded by the Office of the Assistant Secretary of Defense for Health Affairs and others; BrainGate2 ClinicalTrials.gov number, NCT00912041.).

    View details for DOI 10.1056/NEJMoa2314132

    View details for PubMedID 39141853

  • Brain control of bimanual movement enabled by recurrent neural networks. Scientific reports Deo, D. R., Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M., Shenoy, K. V. 2024; 14 (1): 1598

    Abstract

    Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left and right-hand movements as neural tuning to both hands becomes increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.
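
    Illustrative sketch (not the authors' code): one way to implement the kind of temporal augmentation described above, dilating or compressing a trial in time and re-ordering chunks before RNN training. The dilation range, chunk count, and array sizes below are arbitrary assumptions.

        import numpy as np

        def augment_trial(x, rng, dilation_range=(0.7, 1.3), n_chunks=4):
            """Time-dilate/compress one trial of binned neural features (time x channels),
            then split it into chunks and re-order them (illustrative only)."""
            # 1) Resample in time by a random factor using linear interpolation.
            factor = rng.uniform(*dilation_range)
            t_old = np.arange(x.shape[0])
            t_new = np.linspace(0, x.shape[0] - 1, int(round(x.shape[0] * factor)))
            x_warp = np.stack([np.interp(t_new, t_old, x[:, c]) for c in range(x.shape[1])], axis=1)
            # 2) Split into chunks and shuffle their order.
            chunks = np.array_split(x_warp, n_chunks, axis=0)
            order = rng.permutation(n_chunks)
            return np.concatenate([chunks[i] for i in order], axis=0)

        rng = np.random.default_rng(0)
        trial = rng.poisson(2.0, size=(100, 192)).astype(float)  # 100 time bins x 192 channels (stand-in data)
        augmented = augment_trial(trial, rng)
        print(trial.shape, augmented.shape)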

    View details for DOI 10.1038/s41598-024-51617-3

    View details for PubMedID 38238386

    View details for PubMedCentralID PMC10796685

  • A high-performance speech neuroprosthesis. Nature Willett, F. R., Kunz, E. M., Fan, C., Avansino, D. T., Wilson, G. H., Choi, E. Y., Kamdar, F., Glasser, M. F., Hochberg, L. R., Druckmann, S., Shenoy, K. V., Henderson, J. M. 2023

    Abstract

    Speech brain-computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text1,2 or sound3,4. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary1-7. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant, who can no longer speak intelligibly owing to amyotrophic lateral sclerosis, achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant's attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record8 and begins to approach the speed of natural conversation (160 words per minute9). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak.
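
    Illustrative sketch (not the paper's evaluation code): word error rate, the metric quoted above, is conventionally computed as the word-level edit distance between the decoded and reference sentences divided by the reference length. The example sentences are made up.

        def word_error_rate(reference, hypothesis):
            """Levenshtein distance between word sequences, divided by reference length."""
            ref, hyp = reference.split(), hypothesis.split()
            # dp[i][j] = edits to turn the first i reference words into the first j decoded words
            dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dp[i][0] = i
            for j in range(len(hyp) + 1):
                dp[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                    dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
            return dp[-1][-1] / max(len(ref), 1)

        # One deleted word out of seven gives a WER of about 0.14.
        print(word_error_rate("i would like a glass of water", "i would like glass of water"))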

    View details for DOI 10.1038/s41586-023-06377-x

    View details for PubMedID 37612500

  • Plug-and-Play Stability for Intracortical Brain-Computer Interfaces: A One-Year Demonstration of Seamless Brain-to-Text Communication. Advances in Neural Information Processing Systems (NeurIPS) Fan, C., Hahn, N., Kamdar, F., Avansino, D., Wilson, G. H., Hochberg, L., Shenoy, K. V., Henderson, J. M., Willett, F. R. 2023
  • Learned motor patterns are replayed in human motor cortex during sleep. The Journal of neuroscience : the official journal of the Society for Neuroscience Rubin, D. B., Hosman, T., Kelemen, J. N., Kapitonava, A., Willett, F. R., Coughlin, B. F., Halgren, E., Kimchi, E. Y., Williams, Z. M., Simeral, J. D., Hochberg, L. R., Cash, S. S. 2022

    Abstract

    Consolidation of memory is believed to involve offline replay of neural activity. While amply demonstrated in rodents, evidence for replay in humans, particularly regarding motor memory, is less compelling. To determine whether replay occurs after motor learning, we sought to record from motor cortex during a novel motor task and subsequent overnight sleep. A 36-year-old man with tetraplegia secondary to cervical spinal cord injury enrolled in the ongoing BrainGate brain-computer interface pilot clinical trial had two 96-channel intracortical microelectrode arrays placed chronically into left pre-central gyrus (PCG). Single- and multi-unit activity was recorded while he played a color/sound sequence matching memory game. Intended movements were decoded from motor cortical neuronal activity by a real-time steady-state Kalman filter that allowed the participant to control a neurally driven cursor on the screen. Intracortical neural activity from PCG and 2-lead scalp EEG were recorded overnight as he slept. When decoded using the same steady-state Kalman filter parameters, intracortical neural signals recorded overnight replayed the target sequence from the memory game at intervals throughout the night at a frequency significantly greater than expected by chance. Replay events occurred at speeds ranging from one to four times as fast as initial task execution and were most frequently observed during slow-wave sleep. These results demonstrate that recent visuomotor skill acquisition in humans may be accompanied by replay of the corresponding motor cortex neural activity during sleep. Significance Statement: Within cortex, the acquisition of information is often followed by the offline recapitulation of specific sequences of neural firing. Replay of recent activity is enriched during sleep and may support the consolidation of learning and memory. Using an intracortical brain-computer interface (iBCI), we recorded and decoded activity from motor cortex as a human research participant performed a novel motor task. By decoding neural activity throughout subsequent sleep, we find that neural sequences underlying the recently practiced motor task are repeated throughout the night, providing direct evidence of replay in human motor cortex during sleep. This approach, using an optimized BCI decoder to characterize neural activity during sleep, provides a framework for future studies exploring replay, learning, and memory.

    View details for DOI 10.1523/JNEUROSCI.2074-21.2022

    View details for PubMedID 35589391

  • High-performance brain-to-text communication via handwriting. Nature Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M., Shenoy, K. V. 2021; 593 (7858): 249–54

    Abstract

    Brain-computer interfaces (BCIs) can restore communication to people who have lost the ability to move or speak. So far, a major focus of BCI research has been on restoring gross motor skills, such as reaching and grasping1-5 or point-and-click typing with a computer cursor6,7. However, rapid sequences of highly dexterous behaviours, such as handwriting or touch typing, might enable faster rates of communication. Here we developed an intracortical BCI that decodes attempted handwriting movements from neural activity in the motor cortex and translates it to text in real time, using a recurrent neural network decoding approach. With this BCI, our study participant, whose hand was paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect. To our knowledge, these typing speeds exceed those reported for any other BCI, and are comparable to typical smartphone typing speeds of individuals in the age group of our participant (115 characters per minute)8. Finally, theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. Our results open a new approach for BCIs and demonstrate the feasibility of accurately decoding rapid, dexterous movements years after paralysis.

    View details for DOI 10.1038/s41586-021-03506-2

    View details for PubMedID 33981047

  • Decoding spoken English from intracortical electrode arrays in dorsal precentral gyrus. Journal of neural engineering Wilson, G. H., Stavisky, S. D., Willett, F. R., Avansino, D. T., Kelemen, J. N., Hochberg, L. R., Henderson, J. M., Druckmann, S., Shenoy, K. V. 2020; 17 (6): 066007

    Abstract

    OBJECTIVE: To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH: Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS: A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while an RNN classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio. SIGNIFICANCE: The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.
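
    Illustrative sketch (simulated data, not the study's pipeline): the general recipe of binning spike counts around each phoneme onset and fitting a linear classifier. The channel count, bin count, and the scikit-learn classifier choice are assumptions; on random data the score sits near chance.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_channels, n_bins = 390, 64, 5                   # assumed sizes (e.g., 20 ms bins near onset)
        X = rng.poisson(3.0, size=(n_trials, n_channels, n_bins))   # stand-in for binned spike counts
        y = rng.permutation(np.repeat(np.arange(39), 10))           # 39 phoneme labels, 10 trials each

        # Flatten channels x time bins into one feature vector per trial and fit a linear classifier.
        clf = LogisticRegression(max_iter=2000)
        scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
        print("cross-validated accuracy:", scores.mean())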

    View details for DOI 10.1088/1741-2552/abbfef

    View details for PubMedID 33236720

  • Hand Knob Area of Premotor Cortex Represents the Whole Body in a Compositional Way. Cell Willett, F. R., Deo, D. R., Avansino, D. T., Rezaii, P., Hochberg, L. R., Henderson, J. M., Shenoy, K. V. 2020

    Abstract

    Decades after the motor homunculus was first proposed, it is still unknown how different body parts are intermixed and interrelated in human motor cortical areas at single-neuron resolution. Using multi-unit recordings, we studied how face, head, arm, and leg movements are represented in the hand knob area of premotor cortex (precentral gyrus) in people with tetraplegia. Contrary to traditional expectations, we found strong representation of all movements and a partially "compositional" neural code that linked together all four limbs. The code consisted of (1) a limb-coding component representing the limb to be moved and (2) a movement-coding component where analogous movements from each limb (e.g., hand grasp and toe curl) were represented similarly. Compositional coding might facilitate skill transfer across limbs, and it provides a useful framework for thinking about how the motor system constructs movement. Finally, we leveraged these results to create a whole-body intracortical brain-computer interface that spreads targets across all limbs.
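
    Illustrative sketch (a toy model, not the paper's analysis): the "compositional" idea above can be written as a population rate pattern that sums a limb-coding component and a movement-coding component shared across limbs, so analogous movements of different limbs produce correlated patterns. All values here are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        n_neurons = 100
        limbs = ["left_arm", "right_arm", "left_leg", "right_leg"]
        movements = ["flex", "extend", "grasp", "curl"]     # analogous movements shared across limbs

        # Toy compositional model: rate pattern = limb-coding component + shared movement-coding component
        limb_code = {l: rng.normal(size=n_neurons) for l in limbs}
        move_code = {m: rng.normal(size=n_neurons) for m in movements}

        def predicted_rates(limb, movement):
            """Population rate vector predicted by the toy compositional model."""
            return limb_code[limb] + move_code[movement]

        # Analogous movements of different limbs share the movement component,
        # so their predicted population patterns are typically positively correlated.
        a = predicted_rates("right_arm", "grasp")
        b = predicted_rates("left_leg", "grasp")
        print(np.corrcoef(a, b)[0, 1])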

    View details for DOI 10.1016/j.cell.2020.02.043

    View details for PubMedID 32220308

  • Speech-related dorsal motor cortex activity does not interfere with iBCI cursor control. Journal of neural engineering Stavisky, S. D., Willett, F. R., Avansino, D. T., Hochberg, L. R., Shenoy, K. V., Henderson, J. M. 2020; 17 (1): 016049

    Abstract

    OBJECTIVE: Speech-related neural modulation was recently reported in 'arm/hand' area of human dorsal motor cortex that is used as a signal source for intracortical brain-computer interfaces (iBCIs). This raises the concern that speech-related modulation might deleteriously affect the decoding of arm movement intentions, for instance by affecting velocity command outputs. This study sought to clarify whether or not speaking would interfere with ongoing iBCI use. APPROACH: A participant in the BrainGate2 iBCI clinical trial used an iBCI to control a computer cursor; spoke short words in a stand-alone speech task; and spoke short words during ongoing iBCI use. We examined neural activity in all three behaviors and compared iBCI performance with and without concurrent speech. MAIN RESULTS: Dorsal motor cortex firing rates modulated strongly during stand-alone speech, but this activity was largely attenuated when speaking occurred during iBCI cursor control using attempted arm movements. 'Decoder-potent' projections of the attenuated speech-related neural activity were small, explaining why cursor task performance was similar between iBCI use with and without concurrent speaking. SIGNIFICANCE: These findings indicate that speaking does not directly interfere with iBCIs that decode attempted arm movements. This suggests that patients who are able to speak will be able to use motor cortical-driven computer interfaces or prostheses without needing to forgo speaking while using these devices.
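
    Illustrative sketch (not the study's analysis code): what a "decoder-potent" projection means in practice, i.e., projecting a pattern of neural modulation onto the row space of a linear decoder's weight matrix; only the component in that subspace can move the cursor. The decoder weights and the modulation pattern here are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        n_channels = 192
        W = rng.normal(size=(2, n_channels))            # stand-in for a 2D velocity decoder's weights

        # Orthonormal basis for the decoder's row space ("decoder-potent" dimensions).
        Q, _ = np.linalg.qr(W.T)                        # n_channels x 2
        P_potent = Q @ Q.T                              # projector onto the decoder-potent subspace
        P_null = np.eye(n_channels) - P_potent          # projector onto the decoder-null subspace

        speech_activity = rng.normal(size=n_channels)   # stand-in for speech-related modulation
        potent_part = P_potent @ speech_activity
        null_part = P_null @ speech_activity

        # Modulation that lies mostly in the null space does not move the cursor.
        print(np.linalg.norm(potent_part), np.linalg.norm(null_part))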

    View details for DOI 10.1088/1741-2552/ab5b72

    View details for PubMedID 32023225

  • Neural Representation of Observed, Imagined, and Attempted Grasping Force in Motor Cortex of Individuals with Chronic Tetraplegia. Scientific reports Rastogi, A., Vargas-Irwin, C. E., Willett, F. R., Abreu, J., Crowder, D. C., Murphy, B. A., Memberg, W. D., Miller, J. P., Sweet, J. A., Walter, B. L., Cash, S. S., Rezaii, P. G., Franco, B., Saab, J., Stavisky, S. D., Shenoy, K. V., Henderson, J. M., Hochberg, L. R., Kirsch, R. F., Ajiboye, A. B. 2020; 10 (1): 1429

    Abstract

    Hybrid kinetic and kinematic intracortical brain-computer interfaces (iBCIs) have the potential to restore functional grasping and object interaction capabilities in individuals with tetraplegia. This requires an understanding of how kinetic information is represented in neural activity, and how this representation is affected by non-motor parameters such as volitional state (VoS), namely, whether one observes, imagines, or attempts an action. To this end, this work investigates how motor cortical neural activity changes when three human participants with tetraplegia observe, imagine, and attempt to produce three discrete hand grasping forces with the dominant hand. We show that force representation follows the same VoS-related trends as previously shown for directional arm movements; namely, that attempted force production recruits more neural activity compared to observed or imagined force production. Additionally, VoS modulated neural activity to a greater extent than grasping force. Neural representation of forces was lower than expected, possibly due to compromised somatosensory pathways in individuals with tetraplegia, which have been shown to influence motor cortical activity. Nevertheless, attempted forces (but not always observed or imagined forces) could be decoded significantly above chance, thereby potentially providing relevant information towards the development of a hybrid kinetic and kinematic iBCI.

    View details for DOI 10.1038/s41598-020-58097-1

    View details for PubMedID 31996696

  • Neural ensemble dynamics in dorsal motor cortex during speech in people with paralysis. eLife Stavisky, S. D., Willett, F. R., Wilson, G. H., Murphy, B. A., Rezaii, P., Avansino, D. T., Memberg, W. D., Miller, J. P., Kirsch, R. F., Hochberg, L. R., Ajiboye, A. B., Druckmann, S., Shenoy, K. V., Henderson, J. M. 2019; 8

    Abstract

    Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.

    View details for DOI 10.7554/eLife.46015

    View details for PubMedID 31820736

  • Principled BCI Decoder Design and Parameter Selection Using a Feedback Control Model. Scientific reports Willett, F. R., Young, D. R., Murphy, B. A., Memberg, W. D., Blabe, C. H., Pandarinath, C., Stavisky, S. D., Rezaii, P., Saab, J., Walter, B. L., Sweet, J. A., Miller, J. P., Henderson, J. M., Shenoy, K. V., Simeral, J. D., Jarosiewicz, B., Hochberg, L. R., Kirsch, R. F., Bolu Ajiboye, A. 2019; 9 (1): 8881

    Abstract

    Decoders optimized offline to reconstruct intended movements from neural recordings sometimes fail to achieve optimal performance online when they are used in closed-loop as part of an intracortical brain-computer interface (iBCI). This is because typical decoder calibration routines do not model the emergent interactions between the decoder, the user, and the task parameters (e.g. target size). Here, we investigated the feasibility of simulating online performance to better guide decoder parameter selection and design. Three participants in the BrainGate2 pilot clinical trial controlled a computer cursor using a linear velocity decoder under different gain (speed scaling) and temporal smoothing parameters and acquired targets with different radii and distances. We show that a user-specific iBCI feedback control model can predict how performance changes under these different decoder and task parameters in held-out data. We also used the model to optimize a nonlinear speed scaling function for the decoder. When used online with two participants, it increased the dynamic range of decoded speeds and decreased the time taken to acquire targets (compared to an optimized standard decoder). These results suggest that it is feasible to simulate iBCI performance accurately enough to be useful for quantitative decoder optimization and design.
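
    Illustrative sketch (not the published model): the flavor of simulating closed-loop cursor control under different decoder gain and smoothing settings, using a simple aim-at-the-target user model with additive decoder noise. The user model, noise level, and parameter values are assumptions, not those fitted in the paper.

        import numpy as np

        def simulate_trial(gain, alpha, target, radius=0.05, dt=0.02, noise=0.3, max_t=10.0, rng=None):
            """Simulate one cursor trial: the modeled user aims at the target, the decoder applies
            exponential smoothing (alpha) and a speed gain; returns acquisition time or None."""
            if rng is None:
                rng = np.random.default_rng(0)
            pos = np.zeros(2)
            vel = np.zeros(2)
            for step in range(int(max_t / dt)):
                err = target - pos
                intended = err / (np.linalg.norm(err) + 1e-9)       # unit vector toward the target
                decoded = intended + noise * rng.normal(size=2)     # noisy decoded command
                vel = alpha * vel + (1 - alpha) * gain * decoded    # temporal smoothing + gain
                pos = pos + vel * dt
                if np.linalg.norm(target - pos) < radius:
                    return (step + 1) * dt
            return None

        # Sweep the gain to see how simulated acquisition time changes for a small target.
        for gain in (0.5, 1.0, 2.0):
            print(gain, simulate_trial(gain=gain, alpha=0.9, target=np.array([0.4, 0.0])))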

    View details for DOI 10.1038/s41598-019-44166-7

    View details for PubMedID 31222030

  • Closed-loop cortical control of virtual reach and posture using cartesian and joint velocity commands. Journal of neural engineering Young, D., Willett, F., Memberg, W. D., Murphy, B. A., Rezaii, P., Walter, B., Sweet, J. A., Miller, J., Shenoy, K. V., Hochberg, L., Kirsch, R. F., Ajiboye, A. B. 2018

    Abstract

    OBJECTIVE: Brain-computer interfaces (BCIs) are a promising technology for the restoration of function to people with paralysis, especially for controlling coordinated reaching. Typical BCI studies decode Cartesian endpoint velocities as commands, but human arm movements might be better controlled in a joint-based coordinate frame, which may match underlying movement encoding in the motor cortex. A better understanding of BCI controlled reaching by people with paralysis may lead to performance improvements in brain-controlled assistive devices. APPROACH: Two intracortical BCI participants in the BrainGate2 pilot clinical trial performed a 3D endpoint virtual reaching task using two decoders: Cartesian and joint velocity. Task performance metrics (i.e. success rate and path efficiency) and single feature and population tuning were compared across the two decoder conditions. The participants also demonstrated the first BCI control of a fourth dimension of reaching, the arm's swivel angle, in a 4D posture matching task. MAIN RESULTS: Both users achieved significantly higher success rates using Cartesian control, and joint controlled trajectories were more variable and significantly more curved. Neural tuning analyses showed that most single feature activity was best described by a Cartesian kinematic encoding model, and population analyses revealed only slight differences in aggregate activity between the decoder conditions. Simulations of a BCI user reproduced trajectory features seen during closed-loop joint control when assuming only Cartesian-tuned features passed through a joint decoder. With minimal training, both participants controlled the virtual arm's swivel angle to complete a 4D posture matching task, and achieved significantly higher success using a Cartesian+swivel decoder compared to a joint velocity decoder. SIGNIFICANCE: These results suggest that Cartesian command interfaces may provide better BCI control of arm movements than other kinematic variables, even in 4D posture tasks with swivel angle targets.

    View details for DOI 10.1088/1741-2552/aaf606

    View details for PubMedID 30523839

  • A Comparison of Intention Estimation Methods for Decoder Calibration in Intracortical Brain-Computer Interfaces IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING Willett, F. R., Murphy, B. A., Young, D., Memberg, W. D., Blabe, C. H., Pandarinath, C., Franco, B., Saab, J., Walter, B. L., Sweet, J. A., Miller, J. P., Henderson, J. M., Shenoy, K. V., Simeral, J. D., Jarosiewicz, B., Hochberg, L. R., Kirsch, R. F., Ajiboye, A. 2018; 65 (9): 2066–78

    Abstract

    Recent reports indicate that making better assumptions about the user's intended movement can improve the accuracy of decoder calibration for intracortical brain-computer interfaces. Several methods now exist for estimating user intent, including an optimal feedback control model, a piecewise-linear feedback control model, ReFIT, and other heuristics. Which of these methods yields the best decoding performance? Using data from the BrainGate2 pilot clinical trial, we measured how a steady-state velocity Kalman filter decoder was affected by the choice of intention estimation method. We examined three separate components of the Kalman filter: dimensionality reduction, temporal smoothing, and output gain (speed scaling). The decoder's dimensionality reduction properties were largely unaffected by the intention estimation method. Decoded velocity vectors differed by <5% in terms of angular error and speed vs. target distance curves across methods. In contrast, the smoothing and gain properties of the decoder were greatly affected (>50% difference in average values). Since the optimal gain and smoothing properties are task-specific (e.g. lower gains are better for smaller targets but worse for larger targets), no one method was better for all tasks. Our results show that, when gain and smoothing differences are accounted for, current intention estimation methods yield nearly equivalent decoders and that simple models of user intent, such as a position error vector (target position minus cursor position), perform comparably to more elaborate models. Our results also highlight that simple differences in gain and smoothing properties have a large effect on online performance and can confound decoder comparisons.
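
    Illustrative sketch (simulated data, not the clinical pipeline): the simplest intention-estimation heuristic discussed above, relabeling each time bin's "intended" velocity as a unit position-error vector (target minus cursor) and refitting a linear velocity mapping. A full Kalman-filter calibration adds more structure; the ordinary least-squares fit and array sizes are simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        T, n_channels = 2000, 96
        neural = rng.normal(size=(T, n_channels))   # binned neural features (stand-in)
        cursor = rng.normal(size=(T, 2))            # recorded cursor positions
        target = rng.normal(size=(T, 2))            # active target position at each bin

        # Intention estimate: a unit position-error vector (target position minus cursor position).
        err = target - cursor
        intended_vel = err / (np.linalg.norm(err, axis=1, keepdims=True) + 1e-9)

        # Refit a linear velocity decoder against the relabeled "intended" velocities.
        W, _, _, _ = np.linalg.lstsq(neural, intended_vel, rcond=None)
        decoded_vel = neural @ W
        print(W.shape, decoded_vel.shape)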

    View details for DOI 10.1109/TBME.2017.2783358

    View details for Web of Science ID 000442349500017

    View details for PubMedID 29989927

    View details for PubMedCentralID PMC6043406

  • Signal processing methods for reducing artifacts in microelectrode brain recordings caused by functional electrical stimulation JOURNAL OF NEURAL ENGINEERING Young, D., Willett, F., Memberg, W. D., Murphy, B., Walter, B., Sweet, J., Miller, J., Hochberg, L. R., Kirsch, R. F., Ajiboye, A. B. 2018; 15 (2): 026014

    Abstract

    Functional electrical stimulation (FES) is a promising technology for restoring movement to paralyzed limbs. Intracortical brain-computer interfaces (iBCIs) have enabled intuitive control over virtual and robotic movements, and more recently over upper extremity FES neuroprostheses. However, electrical stimulation of muscles creates artifacts in intracortical microelectrode recordings that could degrade iBCI performance. Here, we investigate methods for reducing the cortically recorded artifacts that result from peripheral electrical stimulation. One participant in the BrainGate2 pilot clinical trial had two intracortical microelectrode arrays placed in the motor cortex, and thirty-six stimulating intramuscular electrodes placed in the muscles of the contralateral limb. We characterized intracortically recorded electrical artifacts during both intramuscular and surface stimulation. We compared the performance of three artifact reduction methods: blanking, common average reference (CAR) and linear regression reference (LRR), which creates channel-specific reference signals, composed of weighted sums of other channels. Electrical artifacts resulting from surface stimulation were 175× larger than baseline neural recordings (which were 110 µV peak-to-peak), while intramuscular stimulation artifacts were only 4× larger. The artifact waveforms were highly consistent across electrodes within each array. Application of LRR reduced artifact magnitudes to less than 10 µV and largely preserved the original neural feature values used for decoding. Unmitigated stimulation artifacts decreased iBCI decoding performance, but performance was almost completely recovered using LRR, which outperformed CAR and blanking and extracted useful neural information during stimulation artifact periods. The LRR method was effective at reducing electrical artifacts resulting from both intramuscular and surface FES, and almost completely restored iBCI decoding performance (>90% recovery for surface stimulation and full recovery for intramuscular stimulation). The results demonstrate that FES-induced artifacts can be easily mitigated in FES + iBCI systems by using LRR for artifact reduction, and suggest that the LRR method may also be useful in other noise reduction applications.
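
    Illustrative sketch (not the published implementation): a linear-regression-reference style clean-up in which each channel's artifact is estimated as a least-squares weighted sum of the other channels and subtracted. Real use would fit the weights on identified stimulation periods and combine this with filtering or blanking; the data below are synthetic.

        import numpy as np

        def linear_regression_reference(data, fit_idx=None):
            """Subtract from each channel a weighted sum of the other channels.

            data: (T, n_channels) array of raw recordings.
            fit_idx: sample indices used to fit the weights (e.g., artifact periods); default is all samples.
            """
            T, n_ch = data.shape
            fit = data if fit_idx is None else data[fit_idx]
            cleaned = np.empty_like(data)
            for c in range(n_ch):
                others = np.delete(np.arange(n_ch), c)
                # Least-squares weights predicting channel c from all other channels.
                w, _, _, _ = np.linalg.lstsq(fit[:, others], fit[:, c], rcond=None)
                cleaned[:, c] = data[:, c] - data[:, others] @ w
            return cleaned

        rng = np.random.default_rng(0)
        shared = np.sin(np.linspace(0, 60, 5000))[:, None] * rng.normal(size=(1, 32))  # shared artifact waveform
        raw = 0.1 * rng.normal(size=(5000, 32)) + shared
        clean = linear_regression_reference(raw)
        print(raw.std(), clean.std())   # the common-mode artifact is largely removed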

    View details for DOI 10.1088/1741-2552/aa9ee8

    View details for Web of Science ID 000423398600006

    View details for PubMedID 29199642

    View details for PubMedCentralID PMC5818316

  • Rapid calibration of an intracortical brain-computer interface for people with tetraplegia. Journal of neural engineering Brandman, D. M., Hosman, T., Saab, J., Burkhart, M. C., Shanahan, B. E., Ciancibello, J. G., Sarma, A. A., Milstein, D. J., Vargas-Irwin, C. E., Franco, B., Kelemen, J., Blabe, C., Murphy, B. A., Young, D. R., Willett, F. R., Pandarinath, C., Stavisky, S. D., Kirsch, R. F., Walter, B. L., Bolu Ajiboye, A., Cash, S. S., Eskandar, E. N., Miller, J. P., Sweet, J. A., Shenoy, K. V., Henderson, J. M., Jarosiewicz, B., Harrison, M. T., Simeral, J. D., Hochberg, L. R. 2018; 15 (2): 026007

    Abstract

    Brain-computer interfaces (BCIs) can enable individuals with tetraplegia to communicate and control external devices. Though much progress has been made in improving the speed and robustness of neural control provided by intracortical BCIs, little research has been devoted to minimizing the amount of time spent on decoder calibration. We investigated the amount of time users needed to calibrate decoders and achieve performance saturation using two markedly different decoding algorithms: the steady-state Kalman filter, and a novel technique using Gaussian process regression (GP-DKF). Three people with tetraplegia gained rapid closed-loop neural cursor control and peak, plateaued decoder performance within 3 min of initializing calibration. We also show that a BCI-naïve user (T5) was able to rapidly attain closed-loop neural cursor control with the GP-DKF using self-selected movement imagery on his first-ever day of closed-loop BCI use, acquiring a target 37 s after initiating calibration. These results demonstrate the potential for an intracortical BCI to be used immediately after deployment by people with paralysis, without the need for user learning or extensive system calibration.

    View details for DOI 10.1088/1741-2552/aa9ee7

    View details for PubMedID 29363625

  • Decoding Speech from Intracortical Multielectrode Arrays in Dorsal "Arm/Hand Areas" of Human Motor Cortex. Stavisky, S. D., Rezaii, P., Willett, F. R., Hochberg, L. R., Shenoy, K., Henderson, J. M. IEEE. 2018: 93–97

    Abstract

    Neural prostheses are being developed to restore speech to people with neurological injury or disease. A key design consideration is where and how to access neural correlates of intended speech. Most prior work has examined cortical field potentials at a coarse resolution using electroencephalography (EEG) or medium resolution using electrocorticography (ECoG). The few studies of speech with single-neuron resolution recorded from ventral areas known to be part of the speech network. Here, we recorded from two 96-electrode arrays chronically implanted into the 'hand knob' area of motor cortex while a person with tetraplegia spoke. Despite being located in an area previously demonstrated to modulate during attempted arm movements, many electrodes' neuronal firing rates responded to speech production. In offline analyses, we could classify which of 9 phonemes (plus silence) was spoken with 81% single-trial accuracy using a combination of spike rate and local field potential (LFP) power. This suggests that high-fidelity speech prostheses may be possible using large-scale intracortical recordings in motor cortical areas involved in controlling speech articulators.

    View details for Web of Science ID 000596231900022

    View details for PubMedID 30440349

  • Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. Lancet (London, England) Ajiboye, A. B., Willett, F. R., Young, D. R., Memberg, W. D., Murphy, B. A., Miller, J. P., Walter, B. L., Sweet, J. A., Hoyen, H. A., Keith, M. W., Peckham, P. H., Simeral, J. D., Donoghue, J. P., Hochberg, L. R., Kirsch, R. F. 2017; 389 (10081): 1821-1830

    Abstract

    People with chronic tetraplegia, due to high-cervical spinal cord injury, can regain limb movements through coordinated electrical stimulation of peripheral muscles and nerves, known as functional electrical stimulation (FES). Users typically command FES systems through other preserved, but unrelated and limited in number, volitional movements (eg, facial muscle activity, head movements, shoulder shrugs). We report the findings of an individual with traumatic high-cervical spinal cord injury who coordinated reaching and grasping movements using his own paralysed arm and hand, reanimated through implanted FES, and commanded using his own cortical signals through an intracortical brain-computer interface (iBCI). We recruited a participant into the BrainGate2 clinical trial, an ongoing study that obtains safety information regarding an intracortical neural interface device, and investigates the feasibility of people with tetraplegia controlling assistive devices using their cortical signals. Surgical procedures were performed at University Hospitals Cleveland Medical Center (Cleveland, OH, USA). Study procedures and data analyses were performed at Case Western Reserve University (Cleveland, OH, USA) and the US Department of Veterans Affairs, Louis Stokes Cleveland Veterans Affairs Medical Center (Cleveland, OH, USA). The study participant was a 53-year-old man with a spinal cord injury (cervical level 4, American Spinal Injury Association Impairment Scale category A). He received two intracortical microelectrode arrays in the hand area of his motor cortex, and 4 months and 9 months later received a total of 36 implanted percutaneous electrodes in his right upper and lower arm to electrically stimulate his hand, elbow, and shoulder muscles. The participant used a motorised mobile arm support for gravitational assistance and to provide humeral abduction and adduction under cortical control. We assessed the participant's ability to cortically command his paralysed arm to perform simple single-joint arm and hand movements and functionally meaningful multi-joint movements. We compared iBCI control of his paralysed arm with that of a virtual three-dimensional arm. This study is registered with ClinicalTrials.gov, number NCT00912041. The intracortical implant occurred on Dec 1, 2014, and we are continuing to study the participant. The last session included in this report was Nov 7, 2016. The point-to-point target acquisition sessions began on Oct 8, 2015 (311 days after implant). The participant successfully cortically commanded single-joint and coordinated multi-joint arm movements for point-to-point target acquisitions (80-100% accuracy), using first a virtual arm and second his own arm animated by FES. Using his paralysed arm, the participant volitionally performed self-paced reaches to drink a mug of coffee (successfully completing 11 of 12 attempts within a single session 463 days after implant) and feed himself (717 days after implant). To our knowledge, this is the first report of a combined implanted FES+iBCI neuroprosthesis for restoring both reaching and grasping movements to people with chronic tetraplegia due to spinal cord injury, and represents a major advance, with a clear translational path, for clinically viable neuroprostheses for restoration of reaching and grasping after paralysis. Funded by the National Institutes of Health and the Department of Veterans Affairs.

    View details for DOI 10.1016/S0140-6736(17)30601-3

    View details for PubMedID 28363483

    View details for PubMedCentralID PMC5516547

  • High performance communication by people with paralysis using an intracortical brain-computer interface. eLife Pandarinath, C., Nuyujukian, P., Blabe, C. H., Sorice, B. L., Saab, J., Willett, F. R., Hochberg, L. R., Shenoy, K. V., Henderson, J. M. 2017; 6

    Abstract

    Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O'Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4-4.2) and information throughput (by a factor of 2.2-4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function. Clinical Trial No: NCT00912041.

    View details for DOI 10.7554/eLife.18554

    View details for PubMedID 28220753

  • Feedback control policies employed by people using intracortical brain-computer interfaces JOURNAL OF NEURAL ENGINEERING Willett, F. R., Pandarinath, C., Jarosiewicz, B., Murphy, B. A., Memberg, W. D., Blabe, C. H., Saab, J., Walter, B. L., Sweet, J. A., Miller, J. P., Henderson, J. M., Shenoy, K. V., Simeral, J. D., Hochberg, L. R., Kirsch, R. F., Ajiboye, A. B. 2017; 14 (1)

    Abstract

    When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a 'feedback control policy'. A better understanding of these policies may inform the design of higher-performing neural decoders. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.

    View details for DOI 10.1088/1741-2560/14/1/016001

    View details for Web of Science ID 000390362600001

    View details for PubMedID 27900953

    View details for PubMedCentralID PMC5239755

  • Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law. Journal of neural engineering Willett, F. R., Murphy, B. A., Memberg, W. D., Blabe, C. H., Pandarinath, C., Walter, B. L., Sweet, J. A., Miller, J. P., Henderson, J. M., Shenoy, K. V., Hochberg, L. R., Kirsch, R. F., Ajiboye, A. B. 2017; 14 (2): 026010

    Abstract

    Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: MT = a + b log2(D/R + 1) (where MT is movement time, D is target distance, R is target radius, and a and b are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. We found that movement times were better described by a relationship in which movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder. The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
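
    Illustrative sketch: a numeric check of the two Fitts'-law properties described above, using the common Shannon formulation with arbitrary coefficients a and b (the coefficients fitted in the paper would differ).

        import numpy as np

        def fitts_movement_time(D, R, a=0.2, b=0.3):
            """Fitts' law (Shannon form): MT = a + b * log2(D/R + 1); a and b are arbitrary here."""
            return a + b * np.log2(D / R + 1)

        # Scale invariance: doubling both distance and radius leaves the predicted MT unchanged.
        print(fitts_movement_time(D=0.20, R=0.02), fitts_movement_time(D=0.40, R=0.04))

        # Logarithmic dependence: a 10x smaller target adds only about b*log2(10) seconds.
        print(fitts_movement_time(D=0.20, R=0.02), fitts_movement_time(D=0.20, R=0.002))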

    View details for DOI 10.1088/1741-2552/aa5990

    View details for PubMedID 28177925

  • Differences in motor cortical representations of kinematic variables between action observation and action execution and implications for brain-machine interfaces. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Willett, F. R., Suminski, A. J., Fagg, A. H., Hatsopoulos, N. G. 2014; 2014: 1334-7

    Abstract

    Observing an action being performed and executing the same action cause similar patterns of neural activity to emerge in the primary motor cortex (MI). Previous work has shown that the neural activity evoked during action observation (AO) is informative as to both the kinematics and muscle activation patterns of the action being performed, although the neural activity recorded during action observation contains less information than the activity recorded during action execution (AE). In this study, we extend these results by comparing the representation of different kinematic variables in MI single/multi-unit activity between AO and AE conditions in three rhesus macaques. We show that the representation of acceleration decreases more significantly than that of position and velocity in AO (population decoding performance for acceleration decreases more steeply, and fewer neurons in AO encode acceleration significantly as compared to AE). We discuss the relevance of these results to brain-machine interfaces that make use of neural activity during AO to initialize a mapping function between neural activity and motor commands.

    View details for DOI 10.1109/EMBC.2014.6943845

    View details for PubMedID 25570214

  • Relationship between microelectrode array impedance and chronic recording quality of single units and local field potentials. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Jiang, J., Willett, F. R., Taylor, D. M. 2014; 2014: 3045-8

    Abstract

    Practical application of intracortical microelectrode technology is currently hindered by the inability to reliably record neuronal signals chronically. The precise mechanism of device failure is still under debate, but most likely includes some combination of tissue reaction, mechanical failure, and chronic material degradation. Impedance is a measure of the ease with which current flows through a working electrode under a driving voltage. Impedance has been hypothesized to provide information about an electrode's surrounding tissue reaction as well as chronic insulation degradation. In this study, we investigated the relationship between an electrode's impedance and its chronic recording performance as measured by the number of isolatable single units and the quality of local field potential recordings. Two 64-channel electrode arrays implanted in separate monkeys were assessed. We found no simple relationship between impedance and recording quality that held for both animals across all time periods. This suggests that future investigations on the topic should adopt a more fine-grained within-day and within-animal analysis. We also found new evidence from local field potential spatial correlation supporting the theory that insulation degradation is an important contributor to electrode failure.

    View details for DOI 10.1109/EMBC.2014.6944265

    View details for PubMedID 25570633

  • Online adaptive decoding of intended movements with a hybrid kinetic and kinematic brain machine interface. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Suminski, A. J., Fagg, A. H., Willett, F. R., Bodenhamer, M., Hatsopoulos, N. G. 2013; 2013: 1583-6

    Abstract

    Traditional brain machine interfaces for control of a prosthesis have typically focused on the kinematics of movement, rather than the dynamics. BMI decoders that extract the forces and/or torques to be applied by a prosthesis have the potential for giving the patient a much richer level of control across different dynamic scenarios or even scenarios in which the dynamics of the limb/environment are changing. However, it is a challenge to train a decoder that is able to capture this richness given the small amount of calibration data that is usually feasible to collect a priori. In this work, we propose that kinetic decoders should be continuously calibrated based on how they are used by the subject. Both intended hand position and joint torques are decoded simultaneously as a monkey performs a random target pursuit task. The deviation between intended and actual hand position is used as an estimate of error in the recently decoded joint torques. In turn, these errors are used to drive a gradient descent algorithm for improving the torque decoder parameters. We show that this approach is able to quickly restore the functionality of a torque decoder following substantial corruption with Gaussian noise.

    View details for DOI 10.1109/EMBC.2013.6609817

    View details for PubMedID 24110004

  • Improving brain-machine interface performance by decoding intended future movements. Journal of neural engineering Willett, F. R., Suminski, A. J., Fagg, A. H., Hatsopoulos, N. G. 2013; 10 (2): 026011

    Abstract

    A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time lead in the future. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.
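
    Illustrative sketch (simulated data, not the study's decoder): the core idea of a future-prediction decoder, i.e., fitting the mapping from neural features at time t to kinematics at time t + lead so the output anticipates the control-loop delay. The 200 ms lead, 20 ms bins, and least-squares fit are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        T, n_channels = 3000, 96
        neural = rng.normal(size=(T, n_channels))   # binned neural features (stand-in)
        hand_vel = rng.normal(size=(T, 2))          # recorded hand velocity (stand-in)

        bin_ms = 20
        lead_bins = 200 // bin_ms                   # decode intent 200 ms into the future

        # Align neural activity at time t with kinematics at time t + lead, then fit a linear map.
        X = neural[:-lead_bins]
        Y = hand_vel[lead_bins:]
        W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)

        # At run time, the decoder's output is treated as the movement the user intends
        # to make lead_bins bins from now, compensating for delay in the control loop.
        future_vel_estimate = neural @ W
        print(future_vel_estimate.shape)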

    View details for DOI 10.1088/1741-2560/10/2/026011

    View details for PubMedID 23428966

    View details for PubMedCentralID PMC4019387

  • Compensating for delays in brain-machine interfaces by decoding intended future movement. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Willett, F. R., Suminski, A. J., Fagg, A. H., Hatsopoulos, N. G. 2012; 2012: 4087-90

    Abstract

    Typically, brain-machine interfaces that enable the control of a prosthetic arm work by decoding a subject's intended hand position or velocity and using a controller to move the arm accordingly. Researchers taking this approach often choose to decode the subject's desired arm state in the present moment, which causes the prosthetic arm to lag behind the state desired by the user, as the dynamics of the arm (and other control delays) constrain how quickly the controller can change the arm's state. We tested the hypothesis that decoding the subject's intended future movements would mitigate this lag and improve BMI performance. Offline results show that predictions of future movement (≤ 200 ms) can be made with essentially the same accuracy as predictions of present movement. Online results from one monkey show that performance increases as a function of the future prediction time lead, reaching optimum performance at a time lead equal to the delay inherent in the controlled system.

    View details for DOI 10.1109/EMBC.2012.6346865

    View details for PubMedID 23366826

  • Continuous decoding of intended movements with a hybrid kinetic and kinematic brain machine interface. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference Suminski, A. J., Willett, F. R., Fagg, A. H., Bodenhamer, M., Hatsopoulos, N. G. 2011; 2011: 5802-6

    Abstract

    Although most brain-machine interface (BMI) studies have focused on decoding kinematic parameters of motion, it is known that motor cortical activity also correlates with kinetic signals, including hand force and joint torque. In this experiment, a monkey used a cortically-controlled BMI to move a visual cursor and hit a sequence of randomly placed targets. By varying the contributions of separate kinetic and kinematic decoders to the movement of a virtual arm, we evaluated the hypothesis that a BMI incorporating both signals (Hybrid BMI) would outperform a BMI decoding kinematic information alone (Position BMI). We show that the trajectories generated by the Hybrid BMI during real-time decoding were straighter and smoother than those of the Position BMI. These results may have important implications for BMI applications that require controlling devices with inherent, physical dynamics or applying forces to the environment.

    View details for DOI 10.1109/IEMBS.2011.6091436

    View details for PubMedID 22255659