Academic Appointments


  • Instructor, Ophthalmology

All Publications


  • Robust deep learning object recognition models rely on low frequency information in natural images. Li, Z., Ortega Caro, J., Rusak, E., Brendel, W., Bethge, M., Anselmi, F., Patel, A. B., Tolias, A. S., Pitkow, X. PLoS Computational Biology. 2023; 19 (3): e1010932

    Abstract

    Machine learning models have difficulty generalizing to data outside of the distribution they were trained on. In particular, vision models are usually vulnerable to adversarial attacks or common corruptions, to which the human visual system is robust. Recent studies have found that regularizing machine learning models to favor brain-like representations can improve model robustness, but it is unclear why. We hypothesize that the increased model robustness is partly due to the low spatial frequency preference inherited from the neural representation. We tested this simple hypothesis with several frequency-oriented analyses, including the design and use of hybrid images to probe model frequency sensitivity directly. We also examined many other publicly available robust models that were trained on adversarial images or with data augmentation, and found that all these robust models showed a greater preference for low spatial frequency information. We show that preprocessing by blurring can serve as a defense mechanism against both adversarial attacks and common corruptions, further confirming our hypothesis and demonstrating the utility of low spatial frequency information in robust object recognition.

    DOI: 10.1371/journal.pcbi.1010932

    PubMedID: 36972288

    PubMedCentralID: PMC10079058
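
    As a minimal, illustrative sketch of the two frequency-oriented manipulations this abstract mentions, the Python below builds a hybrid image (low frequencies of one image combined with high frequencies of another) and a blur-based preprocessing defense. It is not the paper's code; the SciPy Gaussian filter and the sigma values are assumptions chosen for illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def blur_defense(image, sigma=1.5):
          # Low-pass the image spatially (not across color channels) before
          # handing it to a classifier, keeping mostly low-frequency content.
          return gaussian_filter(image, sigma=(sigma, sigma, 0))

      def hybrid_image(low_source, high_source, sigma=3.0):
          # Low frequencies from one image plus high frequencies from another,
          # used to probe which frequency band a model relies on.
          low = gaussian_filter(low_source, sigma=(sigma, sigma, 0))
          high = high_source - gaussian_filter(high_source, sigma=(sigma, sigma, 0))
          return low + high

      # Toy usage with random arrays standing in for natural images (H, W, C).
      img_a = np.random.rand(224, 224, 3).astype(np.float32)
      img_b = np.random.rand(224, 224, 3).astype(np.float32)
      defended = blur_defense(img_a)        # fed to the model instead of img_a
      probe = hybrid_image(img_a, img_b)    # which source does the model report?

    Whether a classifier labels the probe by its low-frequency or its high-frequency source is one way to read out the frequency preference discussed above.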

  • Learning From Brains How to Regularize Machines. Li, Z., Brendel, W., Walker, E. Y., Cobos, E., Muhammad, T., Reimer, J., Bethge, M., Sinz, F. H., Pitkow, X., Tolias, A. S. Advances in Neural Information Processing Systems (NeurIPS). 2019
  • A Single, Continuously Applied Control Policy for Modeling Reaching Movements with and without Perturbation. Li, Z., Mazzoni, P., Song, S., Qian, N. Neural Computation. 2018; 30 (2): 397-427

    Abstract

    It has been debated whether kinematic features, such as the number of peaks or decomposed submovements in a velocity profile, indicate the number of discrete motor impulses or result from a continuous control process. The debate is particularly relevant for tasks involving target perturbation, which can alter movement kinematics. To simulate such tasks, finite-horizon models require two preset movement durations to compute two control policies before and after the perturbation. Another model employs infinite- and finite-horizon formulations to determine, respectively, movement durations and control policies, which are updated every time step. We adopted an infinite-horizon optimal feedback control model that, unlike previous approaches, does not preset movement durations or use multiple control policies. It contains both control-dependent and control-independent noise in the system dynamics, state-dependent and state-independent noise in the sensory feedback, and different delays and noise levels for visual and proprioceptive feedback. We analytically derived an optimal solution that can be applied continuously to move an effector toward a target regardless of whether, when, or where the target jumps. This single policy produces different numbers of peaks and "submovements" in velocity profiles for different conditions and trials. Movements that are slower or perturbed later appear to have more submovements. The model is also consistent with the observation that subjects can perform the perturbation task even without detecting the target jump or seeing their hands during reaching. Finally, because the model incorporates Weber's law via a state representation relative to the target, it explains why initial and terminal visual feedback are, respectively, less and more effective in improving end-point accuracy. Our work suggests that the number of peaks or submovements in a velocity profile does not necessarily reflect the number of motor impulses and that the difference between initial and terminal feedback does not necessarily imply a transition between open- and closed-loop strategies.

    DOI: 10.1162/neco_a_01040

    PubMedID: 29162001
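
    As a rough orientation to the class of model this abstract describes, the LaTeX below sketches a generic discounted infinite-horizon optimal feedback control problem with signal-dependent noise and delayed observations. The matrices, noise terms, delay, and discount factor are illustrative placeholders, not the paper's exact formulation (which further distinguishes visual and proprioceptive channels and derives the optimal solution analytically).

      \begin{align}
        \mathbf{x}_{t+1} &= A\,\mathbf{x}_t + B\,\mathbf{u}_t + \boldsymbol{\xi}_t
                            + \textstyle\sum_i \varepsilon_{i,t}\, C_i\,\mathbf{u}_t
          && \text{(control-dependent and -independent dynamics noise)} \\
        \mathbf{y}_t &= H\,\mathbf{x}_{t-\tau} + \boldsymbol{\omega}_t
                            + \textstyle\sum_j \eta_{j,t}\, D_j\,\mathbf{x}_{t-\tau}
          && \text{(state-dependent and -independent sensory noise, delay } \tau) \\
        J &= \mathbb{E}\Bigl[\textstyle\sum_{t=0}^{\infty} \alpha^{t}
               \bigl(\mathbf{x}_t^{\top} Q\,\mathbf{x}_t + \mathbf{u}_t^{\top} R\,\mathbf{u}_t\bigr)\Bigr],
          \qquad \mathbf{u}_t = -L\,\hat{\mathbf{x}}_t
          && \text{(one stationary policy, applied at every step)}
      \end{align}

    Because the horizon is infinite, the gain L and the estimator producing \hat{\mathbf{x}}_t are time-invariant, so the same policy simply continues to be applied if the target jumps; no movement duration needs to be preset and no second policy is computed.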

  • Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data. Zhaoping, L., Zhe, L. PLoS Computational Biology. 2015; 11 (10): e1004375

    Abstract

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis' first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of which is saliently different from all the others (which are identical to one another), the saliency at the singleton's location can be measured by the shortness of the reaction time to find that singleton in a visual search. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention.

    DOI: 10.1371/journal.pcbi.1004375

    PubMedID: 26441341

    PubMedCentralID: PMC4595278
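
    As a hedged illustration of how a parameter-free prediction of this kind can be written, the LaTeX below uses a race (minimum) formulation: if no V1 neuron is tuned jointly to color (C), orientation (O), and motion (M), the saliency signal evoked by a triple-feature singleton is the maximum of the signals the corresponding single-feature singletons would evoke, so its reaction time behaves like the fastest of the component reaction times. This is a sketch of that logic under an independence assumption, not necessarily the paper's exact derivation.

      \[
        \mathrm{RT}_{\mathrm{COM}} \;\overset{d}{=}\;
          \min\bigl(\mathrm{RT}_{C},\,\mathrm{RT}_{O},\,\mathrm{RT}_{M}\bigr)
        \quad\Longrightarrow\quad
        P\bigl(\mathrm{RT}_{\mathrm{COM}} > t\bigr)
          \;=\; P\bigl(\mathrm{RT}_{C} > t\bigr)\,
                P\bigl(\mathrm{RT}_{O} > t\bigr)\,
                P\bigl(\mathrm{RT}_{M} > t\bigr)
        \quad \text{for all } t.
      \]

    The right-hand side involves only measured single-feature reaction-time distributions, so the full distribution for the triple-feature singleton is predicted with no free parameters.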