All Publications


  • The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Long, B., Goodin, S., Kachergis, G., Marchman, V. A., Radwan, S. F., Sparks, R. Z., Xiang, V., Zhuang, C., Hsu, O., Newman, B., Yamins, D. L., Frank, M. C. Behavior Research Methods, 2023

    Abstract

    Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and across labs. Further, the video data captured by these cameras to date have been relatively low resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented vertical field of view that encompasses children's interactions both with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments. We also provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage the wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.
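
    Since the BabyView records through a GoPro, the accelerometer and gyroscope signals travel inside the same MP4 container as the video, in a GPMF ("gpmd") metadata track. The paper is a hardware and protocol description rather than a code release, so the Python sketch below is only an illustration of how such a track could be pulled out for analysis: it assumes ffprobe and ffmpeg are installed, and the file names are hypothetical.

        import json
        import subprocess

        def extract_gpmf_track(video_path: str, out_path: str) -> None:
            """Copy the GoPro telemetry ('gpmd') track out of an MP4.

            GoPro recordings carry sensor data (accelerometer, gyroscope)
            in a dedicated metadata stream alongside the video.
            """
            # Ask ffprobe to list all streams in machine-readable form.
            probe = subprocess.run(
                ["ffprobe", "-v", "quiet", "-print_format", "json",
                 "-show_streams", video_path],
                capture_output=True, text=True, check=True,
            )
            streams = json.loads(probe.stdout)["streams"]

            # Find the stream tagged 'gpmd' (GoPro Metadata Format).
            gpmd = next(s for s in streams
                        if s.get("codec_tag_string") == "gpmd")

            # Copy that stream's raw bytes out without re-encoding.
            subprocess.run(
                ["ffmpeg", "-y", "-i", video_path,
                 "-map", f"0:{gpmd['index']}",
                 "-codec", "copy", "-f", "rawvideo", out_path],
                check=True,
            )

        extract_gpmf_track("babyview_session.mp4", "telemetry.gpmf")  # hypothetical file names

    Decoding the resulting binary stream into sensor time series would then be a job for a GPMF reader such as GoPro's open-source gpmf-parser.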

    View details for DOI 10.3758/s13428-023-02206-1

    View details for PubMedID 37656342

    View details for PubMedCentralID PMC8375006

  • How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning? Zhuang, C., Xiang, V., Bai, Y., Jia, X., Turk-Browne, N., Norman, K., DiCarlo, J. J., Yamins, D. L. Advances in Neural Information Processing Systems, 2022; 35: 22628-22642

    Abstract

    Humans learn from visual inputs at multiple timescales, both rapidly and flexibly acquiring visual knowledge over short periods and robustly accumulating online learning progress over longer periods. Modeling these powerful learning capabilities is an important problem for computational visual cognitive science, and models that could replicate them would be of substantial utility in real-world computer vision settings. In this work, we establish benchmarks for both real-time and life-long continual visual learning. Our real-time learning benchmark measures a model's ability to match the rapid visual behavior changes of real humans over the course of minutes and hours, given a stream of visual inputs. Our life-long learning benchmark evaluates the performance of models in a purely online learning curriculum obtained directly from child visual experience over years of development. We evaluate a spectrum of recent deep self-supervised visual learning algorithms on both benchmarks, finding that none of them perfectly matches human performance, though some perform substantially better than others. Interestingly, algorithms embodying recent trends in self-supervised learning (including BYOL, SwAV, and MAE) are substantially worse on our benchmarks than an earlier generation of self-supervised algorithms such as SimCLR and MoCo-v2. We present analysis indicating that the failure of these newer algorithms is primarily due to their inability to handle the kind of sparse, low-diversity data streams that naturally arise in the real world, and that actively leveraging memory through negative sampling, a mechanism eschewed by these newer algorithms, appears useful for facilitating learning in such low-diversity environments. We also illustrate a complementarity between the short and long timescales of the two benchmarks: requiring a single learning algorithm to be locally context-sensitive enough to match real-time learning changes, yet stable enough to avoid catastrophic forgetting over the long term, induces a trade-off that human-like algorithms may have to straddle. Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanisms by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.
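
    The mechanism credited above, "actively leveraging memory through negative sampling," is the design that distinguishes MoCo-v2 from BYOL, SwAV, and MAE: a FIFO queue of past embeddings supplies negatives for an InfoNCE loss, so even a repetitive, low-diversity batch is contrasted against a diverse memory. A minimal PyTorch sketch of that mechanism follows; the embedding dimension, queue size, and temperature are illustrative defaults, not the paper's settings.

        import torch
        import torch.nn.functional as F

        class MemoryQueueInfoNCE:
            """InfoNCE loss with a FIFO memory of past keys (MoCo-style).

            The queue supplies negatives, so the contrastive signal
            survives even when the current batch is low-diversity
            (e.g., many minutes of a single scene).
            """

            def __init__(self, dim=128, queue_size=4096, temperature=0.2):
                self.temperature = temperature
                # Random unit vectors stand in until real keys arrive.
                self.queue = F.normalize(torch.randn(queue_size, dim), dim=1)
                self.ptr = 0

            def __call__(self, q, k):
                q = F.normalize(q, dim=1)   # queries:       (B, dim)
                k = F.normalize(k, dim=1)   # positive keys: (B, dim)

                l_pos = (q * k).sum(dim=1, keepdim=True)   # (B, 1)
                l_neg = q @ self.queue.t()                 # (B, K) vs. memory
                logits = torch.cat([l_pos, l_neg], dim=1) / self.temperature

                # The positive sits in column 0 of every row.
                labels = torch.zeros(q.size(0), dtype=torch.long)
                loss = F.cross_entropy(logits, labels)

                self._enqueue(k.detach())
                return loss

            def _enqueue(self, keys):
                # Overwrite the oldest entries (FIFO), wrapping around.
                n = keys.size(0)
                idx = (self.ptr + torch.arange(n)) % self.queue.size(0)
                self.queue[idx] = keys
                self.ptr = int((self.ptr + n) % self.queue.size(0))

    BYOL, SwAV, and MAE eschew explicit negative sampling of this kind, which is the design difference the benchmark results above turn on.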

    View details for PubMedID 38435074

    View details for PubMedCentralID PMC10906807
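
    The trade-off described in that abstract can also be stated operationally: an online learner is scored both on how fast it adapts to the current context and on how much earlier competence it retains. The schematic loop below is not the paper's benchmark code; the model interface, stream format, and probe sets are all placeholders for whatever a concrete evaluation would use.

        def evaluate_online(model, stream, probe_sets):
            """Score one pass over a chronologically ordered visual stream.

            model      - exposes learn_step(batch) and accuracy(probes)
                         (a hypothetical interface)
            stream     - iterable of (context, image_batch) pairs in the
                         order they were experienced
            probe_sets - held-out probes keyed by originating context
            """
            adaptation, retention = [], []
            seen = []  # contexts encountered so far

            for context, batch in stream:
                before = model.accuracy(probe_sets[context])
                model.learn_step(batch)  # purely online: one pass, no raw-data replay
                after = model.accuracy(probe_sets[context])

                # Real-time criterion: rapid gain on the current context.
                adaptation.append(after - before)

                # Life-long criterion: average skill on everything prior,
                # which catastrophic forgetting would drag down.
                if seen:
                    retention.append(
                        sum(model.accuracy(probe_sets[c]) for c in seen)
                        / len(seen)
                    )
                if context not in seen:
                    seen.append(context)

            return adaptation, retention

    A learner tuned only for the first score will look context-sensitive but forgetful; one tuned only for the second will look stable but sluggish. Matching humans requires doing well on both at once.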