Professional Education


  • Doctor of Philosophy, Harvard University (2017)
  • Master of Science, École Normale Supérieure (2011)
  • Bachelor of Arts, Stanford University (2009)

All Publications


  • Developmental changes in drawing production under different memory demands in a U.S. and Chinese sample. Developmental Psychology Long, B., Wang, Y., Christie, S., Frank, M. C., Fan, J. E. 2023; 59 (10): 1784-1793

    Abstract

    Children's drawings of common object categories become dramatically more recognizable across childhood. What are the major factors that drive developmental changes in children's drawings? To what degree are children's drawings a product of their changing internal category representations versus limited by their visuomotor abilities or their ability to recall the relevant visual information? To explore these questions, we examined the degree to which developmental changes in drawing recognizability vary across different drawing tasks that vary in memory demands (i.e., drawing from verbal vs. picture cues) and with children's shape-tracing abilities across two geographical locations (San Jose, United States, and Beijing, China). We collected digital shape tracings and drawings of common object categories (e.g., cat, airplane) from 4- to 9-year-olds (N = 253). The developmental trajectory of drawing recognizability was remarkably similar when children were asked to draw from pictures versus verbal cues and across these two geographical locations. In addition, our Beijing sample produced more recognizable drawings but showed similar tracing abilities to children from San Jose. Overall, this work suggests that the developmental trajectory of children's drawings is remarkably consistent and not easily explainable by changes in visuomotor control or working memory; instead, changes in children's drawings over development may at least partly reflect changes in the internal representations of object categories.

    DOI: 10.1037/dev0001600

    PMID: 37768614

  • The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behavior Research Methods Long, B., Goodin, S., Kachergis, G., Marchman, V. A., Radwan, S. F., Sparks, R. Z., Xiang, V., Zhuang, C., Hsu, O., Newman, B., Yamins, D. L., Frank, M. C. 2023

    Abstract

    Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and across labs. Further, the video data captured by these cameras to date have been relatively low-resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented vertical field of view that encompasses both children's interactions with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments. We also provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage the wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.

    DOI: 10.3758/s13428-023-02206-1

    PMID: 37656342

    PMCID: PMC8375006

  • Contributions of early and mid-level visual cortex to high-level object categorization. bioRxiv Kramer, L. E., Chen, Y. C., Long, B., Konkle, T., Cohen, M. R. 2023

    Abstract

    The complexity of visual features for which neurons are tuned increases from early to late stages of the ventral visual stream. Thus, the standard hypothesis is that high-level functions like object categorization are primarily mediated by higher visual areas because they require more complex image formats that are not evident in early visual processing stages. However, human observers can categorize images as objects or animals or as big or small even when the images preserve only some low- and mid-level features but are rendered unidentifiable ('texforms', Long et al., 2018). This observation suggests that even the early visual cortex, in which neurons respond to simple stimulus features, may already encode signals about these more abstract high-level categorical distinctions. We tested this hypothesis by recording from populations of neurons in early and mid-level visual cortical areas while rhesus monkeys viewed texforms and their unaltered source stimuli (simultaneous recordings from areas V1 and V4 in one animal and separate recordings from V1 and V4 in two others). Using recordings from a few dozen neurons, we could decode the real-world size and animacy of both unaltered images and texforms. Furthermore, this neural decoding accuracy across stimuli was related to the ability of human observers to categorize texforms by real-world size and animacy. Our results demonstrate that neuronal populations early in the visual hierarchy contain signals useful for higher-level object perception and suggest that the responses of early visual areas to simple stimulus features display preliminary untangling of higher-level distinctions.

    DOI: 10.1101/2023.05.31.541514

    PMID: 37398251

    PMCID: PMC10312552
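
    The population decoding reported above can be illustrated with a short scikit-learn sketch. This is not the authors' analysis code: the response matrix, animacy labels, and classifier choice are all hypothetical placeholders for the kind of data the abstract describes.

      # Illustrative population-decoding sketch (placeholder data, not the authors' code).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      responses = rng.normal(size=(120, 40))      # hypothetical: 120 stimuli x 40 neurons
      is_animate = rng.integers(0, 2, size=120)   # hypothetical animacy labels

      # Cross-validated linear decoder over the population response.
      decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      scores = cross_val_score(decoder, responses, is_animate, cv=10)
      print(f"cross-validated decoding accuracy: {scores.mean():.2f}")

    Rerunning the same pipeline separately on responses to texforms and to their unaltered source images would yield the two decoding accuracies the abstract compares.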

  • How games can make behavioural science better. Nature Long, B., Simson, J., Buxo-Lugo, A., Watson, D. G., Mehr, S. A. 2023; 613 (7944): 433-436

    Web of Science ID: 000928175300017

    PMID: 36650244

  • A longitudinal analysis of the social information in infants' naturalistic visual experience using automated detections. Developmental Psychology Long, B. L., Kachergis, G., Agrawal, K., Frank, M. C. 2022

    Abstract

    The faces and hands of caregivers and other social partners offer a rich source of social and causal information that is likely critical for infants' cognitive and linguistic development. Previous work using manual annotation strategies and cross-sectional data has found systematic changes in the proportion of faces and hands in the egocentric perspective of young infants. Here, we validated the use of a modern convolutional neural network (OpenPose) for the detection of faces and hands in naturalistic egocentric videos. We then applied this model to a longitudinal collection of more than 1,700 head-mounted camera videos from three children ages 6 to 32 months. Using these detections, we confirm and extend prior results from cross-sectional studies. First, we found a moderate decrease in the proportion of faces in children's view across age and a higher proportion of hands in view than previously reported. Second, we found variability in the proportion of faces and hands viewed by different children in different locations (e.g., living room vs. kitchen), suggesting that individual activity contexts may shape the social information that infants experience. Third, we found evidence that children may see closer, larger views of people, hands, and faces earlier in development. These longitudinal analyses provide an additional perspective on the changes in the social information in view across the first few years of life and suggest that pose detection models can successfully be applied to naturalistic egocentric video data sets to extract descriptives about infants' changing social environment.

    DOI: 10.1037/dev0001414

    PMID: 36227287
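
    As an illustration of the kind of summary these detections support, here is a minimal pandas sketch. It is not the authors' pipeline: it assumes OpenPose output has already been reduced to one row per video frame, and the table and its column names are hypothetical.

      # Summarize per-frame face/hand detections by age (hypothetical data).
      import pandas as pd

      detections = pd.DataFrame({
          "age_months": [6, 6, 18, 18, 32, 32],
          "face_present": [1, 0, 0, 1, 0, 0],
          "hand_present": [0, 1, 1, 1, 1, 0],
      })

      # Proportion of frames containing a face or a hand, by child age.
      summary = detections.groupby("age_months")[["face_present", "hand_present"]].mean()
      print(summary)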

  • Peekbank: An open, large-scale repository for developmental eye-tracking data of children's word recognition. Behavior Research Methods Zettersten, M., Yurovsky, D., Xu, T. L., Uner, S., Tsui, A. S., Schneider, R. M., Saleh, A. N., Meylan, S. C., Marchman, V. A., Mankewitz, J., MacDonald, K., Long, B., Lewis, M., Kachergis, G., Handa, K., deMayo, B., Carstensen, A., Braginsky, M., Boyce, V., Bhatt, N. S., Bergey, C. A., Frank, M. C. 2022

    Abstract

    The ability to rapidly recognize words and link them to referents is central to children's early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants' fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We first present how we created the database, its features and structure, and associated tools for processing and accessing infant eye-tracking datasets. Using these tools, we then work through two illustrative examples to show how researchers can use Peekbank to interrogate theoretical and methodological questions about children's developing word recognition ability.

    DOI: 10.3758/s13428-022-01906-4

    PMID: 36002623
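
    As a sketch of the kind of looking-while-listening analysis Peekbank enables, consider the following. This is not the repository's actual API: the table layout, column names, and analysis window below are assumptions chosen for illustration.

      # Proportion of target looking per child from Peekbank-style timepoint data
      # (hypothetical rows and column names).
      import pandas as pd

      timepoints = pd.DataFrame({
          "subject_id": ["s1"] * 4 + ["s2"] * 4,
          "t_norm": [0, 500, 1000, 1500] * 2,   # ms relative to target label onset
          "aoi": ["other", "target", "target", "distractor",
                  "distractor", "target", "distractor", "target"],
      })

      # Restrict to a post-onset analysis window (assumed here: 300-2000 ms).
      window = timepoints[timepoints["t_norm"].between(300, 2000)]

      # Target looking among target/distractor fixations, per child.
      on_task = window[window["aoi"].isin(["target", "distractor"])]
      accuracy = on_task["aoi"].eq("target").groupby(on_task["subject_id"]).mean()
      print(accuracy)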

  • Automated detections reveal the social information in the changing infant view. Child Development Long, B. L., Sanchez, A., Kraus, A. M., Agrawal, K., Frank, M. C. 2021

    Abstract

    How do postural developments affect infants' access to social information? We recorded egocentric and third-person video while infants and their caregivers (N=36, 8- to 16-month-olds, N=19 females) participated in naturalistic play sessions. We then validated the use of a neural network pose detection model to detect faces and hands in the infant view. We used this automated method to analyze our data and a prior egocentric video dataset (N=17, 12-month-olds). Infants' average posture and orientation with respect to their caregiver changed dramatically across this age range; both posture and orientation modulated access to social information. Together, these results confirm that infants' ability to move and act on the world plays a significant role in shaping the social information in their view.

    DOI: 10.1111/cdev.13648

    PMID: 34787894

  • Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: an observational study. Royal Society Open Science Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., deMayo, B. E., Long, B., Yoon, E. J., Frank, M. C. 2021; 8 (1): 201494

    Abstract

    For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.

    DOI: 10.1098/rsos.201494

    PMID: 33614084

    PMCID: PMC7890505
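
    The >10% criterion for a 'major numerical discrepancy' can be made concrete with a short sketch. The paper's exact operationalization may differ, and the value pairs below are invented placeholders.

      # Flag reproduced values that differ from reported values by more than 10%.
      def is_major_discrepancy(reported: float, reproduced: float, tol: float = 0.10) -> bool:
          if reported == 0:
              return reproduced != 0
          return abs(reproduced - reported) / abs(reported) > tol

      checked = [(0.52, 0.52), (12.4, 14.1), (3.01, 3.00)]  # (reported, reproduced) pairs
      flags = [is_major_discrepancy(a, b) for a, b in checked]
      print(f"{sum(flags)} major discrepancies out of {len(flags)} checked values")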

  • Animacy and object size are reflected in perceptual similarity computations by the preschool years. Visual Cognition Long, B., Moher, M., Carey, S. E., Konkle, T. 2019
  • Real-World Size Is Automatically Encoded in Preschoolers' Object Representations. Journal of Experimental Psychology: Human Perception and Performance Long, B., Moher, M., Carey, S., Konkle, T. 2019; 45 (7): 863–76

    Abstract

    When adults see a picture of an object, they automatically process how big the object typically is in the real world (Konkle & Oliva, 2012a). How much life experience is needed for this automatic size processing to emerge? Here, we ask whether preschoolers show this same signature of automatic size processing. We showed 3- and 4-year-olds displays with two pictures of objects and asked them to touch the picture that was smaller on the screen. Critically, the relative visual sizes of the objects could be either congruent with their relative real-world sizes (e.g., a small picture of a shoe next to a big picture of a car) or incongruent with their relative real-world sizes (e.g., a big picture of a shoe next to a small picture of a car). Across two experiments, we found that preschoolers were worse at making visual size judgments on incongruent trials, suggesting that real-world size was automatically activated and interfered with their performance. In addition, we found that both 4-year-olds and adults showed similar item-pair effects (i.e., showed larger Size-Stroop effects for a given pair of items, relative to other pairs). Furthermore, the magnitude of the item-pair Stroop effects in 4-year-olds did not depend on whether they could recognize the pictured objects, suggesting that the perceptual features of these objects were sufficient to trigger the processing of real-world size information. These results indicate that, by 3-4 years of age, children automatically extract real-world size information from depicted objects.

    DOI: 10.1037/xhp0000619

    Web of Science ID: 000473023200003

    PMID: 30985176
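
    A Size-Stroop effect of the kind described above can be computed as the accuracy cost on incongruent relative to congruent trials. The sketch below is illustrative only; the trial-level data are invented and this is not the authors' analysis.

      # Per-subject Size-Stroop effect from hypothetical trial-level accuracy data.
      import pandas as pd

      trials = pd.DataFrame({
          "subject": ["s1"] * 4 + ["s2"] * 4,
          "condition": ["congruent", "congruent", "incongruent", "incongruent"] * 2,
          "correct": [1, 1, 0, 1, 1, 1, 1, 0],
      })

      acc = trials.groupby(["subject", "condition"])["correct"].mean().unstack()
      acc["stroop_effect"] = acc["congruent"] - acc["incongruent"]
      print(acc)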

  • Mid-level visual features underlie the high-level categorical organization of the ventral stream. Proceedings of the National Academy of Sciences of the United States of America Long, B., Yu, C., Konkle, T. 2018; 115 (38): E9015–E9024

    Abstract

    Human object-selective cortex shows a large-scale organization characterized by the high-level properties of both animacy and object size. To what extent are these neural responses explained by primitive perceptual features that distinguish animals from objects and big objects from small objects? To address this question, we used a texture synthesis algorithm to create a class of stimuli, texforms, which preserve some mid-level texture and form information from objects while rendering them unrecognizable. We found that unrecognizable texforms were sufficient to elicit the large-scale organizations of object-selective cortex along the entire ventral pathway. Further, the structure in the neural patterns elicited by texforms was well predicted by curvature features and by intermediate layers of a deep convolutional neural network, supporting the mid-level nature of the representations. These results provide clear evidence that a substantial portion of ventral stream organization can be accounted for by coarse texture and form information without requiring explicit recognition of intact objects.

    PMID: 30171168
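
    The claim that texform-evoked neural patterns were well predicted by intermediate layers of a deep convolutional neural network is the kind of result typically tested with representational similarity analysis. Here is a schematic sketch of that logic; it is not the authors' code, and both arrays are random placeholders.

      # Correlate a neural RDM with a model-feature RDM (placeholder data).
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(1)
      neural_patterns = rng.normal(size=(60, 200))  # 60 texforms x 200 voxels (hypothetical)
      model_features = rng.normal(size=(60, 512))   # 60 texforms x CNN-layer units (hypothetical)

      # Representational dissimilarity matrices, in condensed form.
      neural_rdm = pdist(neural_patterns, metric="correlation")
      model_rdm = pdist(model_features, metric="correlation")

      rho, _ = spearmanr(neural_rdm, model_rdm)
      print(f"model-neural RDM correlation: rho = {rho:.2f}")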

  • Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science Hardwicke, T. E., Mathur, M. B., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., Hofelich Mohr, A., Clayton, E., Yoon, E. J., Henry Tessler, M., Lenne, R. L., Altman, S., Long, B., Frank, M. C. 2018; 5 (8): 180448

    Abstract

    Access to data is a critical feature of an efficient, progressive and ultimately self-correcting scientific ecosystem. But the extent to which in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data ('analytic reproducibility'). To investigate this, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in 'data available' statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62% post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles, all target values were reproduced, but 11 of these required author assistance. For 13 articles, at least one value could not be reproduced despite author assistance. Importantly, there were no clear indications that original conclusions were seriously impacted. Mandatory open data policies can increase the frequency and quality of data sharing. However, suboptimal data curation, unclear analysis specification and reporting errors can impede analytic reproducibility, undermining the utility of data sharing and the credibility of scientific findings.

    DOI: 10.1098/rsos.180448

    PMID: 30225032

    PMCID: PMC6124055
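
    The interrupted time-series analysis mentioned above can be sketched as a segmented regression with a level change and a slope change at policy onset. This is a generic illustration fit on invented monthly rates, not the authors' model specification.

      # Segmented (interrupted time-series) regression with statsmodels.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      months = np.arange(24)
      policy = (months >= 12).astype(float)               # policy introduced at month 12
      time_since = np.where(policy == 1, months - 12, 0)  # months since policy onset
      rate = 0.25 + 0.5 * policy + 0.005 * time_since + rng.normal(0, 0.03, 24)

      X = sm.add_constant(np.column_stack([months, policy, time_since]))
      fit = sm.OLS(rate, X).fit()
      print(fit.params)  # [baseline, pre-existing trend, level change, slope change]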

  • Constructing agency: the role of language. Frontiers in Psychology Fausey, C. M., Long, B. L., Inamori, A., Boroditsky, L. 2010; 1

    Abstract

    Is agency a straightforward and universal feature of human experience? Or is the construction of agency (including attention to and memory for people involved in events) guided by patterns in culture? In this paper we focus on one aspect of cultural experience: patterns in language. We examined English and Japanese speakers' descriptions of intentional and accidental events. English and Japanese speakers described intentional events similarly, using mostly agentive language (e.g., "She broke the vase"). However, when it came to accidental events English speakers used more agentive language than did Japanese speakers. We then tested whether these different patterns found in language may also manifest in cross-cultural differences in attention and memory. Results from a non-linguistic memory task showed that English and Japanese speakers remembered the agents of intentional events equally well. However, English speakers remembered the agents of accidents better than did Japanese speakers, as predicted from patterns in language. Further, directly manipulating agency in language during another laboratory task changed people's eye-witness memory, confirming a possible causal role for language. Patterns in one's linguistic environment may promote and support how people instantiate agency in context.

    DOI: 10.3389/fpsyg.2010.00162

    Web of Science ID: 000208849100059

    PMCID: PMC3153776