All Publications


  • Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences of the United States of America Zhuang, C., Yan, S., Nayebi, A., Schrimpf, M., Frank, M. C., DiCarlo, J. J., Yamins, D. L. 2021; 118 (3)

    Abstract

    Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today's best supervised methods and that the mapping of these neural network models' hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.

    DOI: 10.1073/pnas.2014196118

    PubMedID: 33431673
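
    The abstract above centers on deep unsupervised contrastive embedding methods. As a rough illustration of that family of objectives, the sketch below implements a generic InfoNCE-style contrastive loss in PyTorch; the encoder stand-ins, temperature, and batch setup are illustrative assumptions, not the paper's exact training configuration (the paper evaluates several specific contrastive methods).

    ```python
    # Minimal sketch of an InfoNCE-style contrastive embedding loss.
    # Assumption: any encoder (e.g., a convolutional trunk plus projection
    # head) produces embeddings z1 and z2 for two augmented views of the
    # same batch of images.
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        """z1, z2: (N, D) embeddings of two views of the same N images."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)        # (2N, D)
        sim = z @ z.t() / temperature          # scaled cosine similarities
        sim.fill_diagonal_(float('-inf'))      # never treat self as a match
        n = z1.shape[0]
        # Each embedding's positive is the other view of the same image;
        # all remaining embeddings in the batch act as negatives.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Toy usage with random stand-ins for encoder outputs:
    view1 = torch.randn(8, 128)
    view2 = view1 + 0.05 * torch.randn(8, 128)
    loss = info_nce_loss(view1, view2)
    ```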

  • Two Routes to Scalable Credit Assignment without Weight Symmetry. Kunin, D., Nayebi, A., Sagastuy-Brena, J., Ganguli, S., Bloom, J. M., Yamins, D. K. Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR. 2020
  • From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. Tanaka, H., Nayebi, A., Maheswaranathan, N., McIntosh, L., Baccus, S. A., Ganguli, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs. Kubilius, J., Schrimpf, M., Kar, K., Rajalingham, R., Hong, H., Majaj, N. J., Issa, E. B., Bashivan, P., Prescott-Roy, J., Schmidt, K., Nayebi, A., Bear, D., Yamins, D. K., DiCarlo, J. J. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Task-Driven Convolutional Recurrent Models of the Visual System. Nayebi, A., Bear, D., Kubilius, J., Kar, K., Ganguli, S., Sussillo, D., DiCarlo, J. J., Yamins, D. K. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2018
  • Deep Learning Models of the Retinal Response to Natural Scenes. Advances in Neural Information Processing Systems McIntosh, L. T., Maheswaranathan, N., Nayebi, A., Ganguli, S., Baccus, S. A. 2016; 29: 1369–77

    Abstract

    A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). An examination of the learned CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.

    PubMedID: 28729779
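
    The abstract above compares linear-nonlinear (LN) models against deep CNNs for predicting retinal ganglion cell responses to natural scenes. The sketch below sets up both model classes in PyTorch under a Poisson spiking loss; the layer sizes, filter counts, stimulus dimensions, and loss choice are assumptions for illustration rather than the paper's exact architecture.

    ```python
    # Minimal sketch: an LN baseline vs. a small CNN for predicting
    # retinal ganglion cell firing rates from spatiotemporal stimuli.
    # Shapes and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    class LNModel(nn.Module):
        """One linear spatiotemporal filter followed by a pointwise nonlinearity."""
        def __init__(self, n_frames=40, height=50, width=50):
            super().__init__()
            self.linear = nn.Linear(n_frames * height * width, 1)
            self.nonlinearity = nn.Softplus()   # keeps predicted rates positive

        def forward(self, stimulus):            # stimulus: (batch, frames, H, W)
            return self.nonlinearity(self.linear(stimulus.flatten(1)))

    class RetinaCNN(nn.Module):
        """Small CNN that treats the temporal window as input channels."""
        def __init__(self, n_frames=40):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_frames, 8, kernel_size=15), nn.ReLU(),
                nn.Conv2d(8, 8, kernel_size=9), nn.ReLU(),
            )
            self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(1), nn.Softplus())

        def forward(self, stimulus):
            return self.readout(self.features(stimulus))

    # Either model can be fit by minimizing a Poisson negative log-likelihood
    # against recorded spike counts:
    stimulus = torch.randn(16, 40, 50, 50)                # toy stimulus clips
    spikes = torch.poisson(torch.full((16, 1), 2.0))      # toy spike counts
    rates = RetinaCNN()(stimulus)
    loss = nn.PoissonNLLLoss(log_input=False)(rates, spikes)
    ```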

  • Quantum Lower Bound for Inverting a Permutation with Advice. QUANTUM INFORMATION & COMPUTATION Nayebi, A., Aaronson, S., Belovs, A., Trevisan, L. 2015; 15 (11-12): 901-913
  • Practical Intractability: A Critique of the Hypercomputation Movement. MINDS AND MACHINES Nayebi, A. 2014; 24 (3): 275-305