All Publications


  • How Video Passthrough Headsets Influence Perception of Self and Others. Cyberpsychology, Behavior, and Social Networking. Santoso, M., Bailenson, J. 2024

    Abstract

    With the increasing adoption of mixed reality (MR) headsets with video passthrough functionality, concerns over perceptual and social effects have surfaced. Building on prior qualitative findings, this study quantitatively investigates the impact of video passthrough on users. Forty participants completed a body transfer task twice, once while wearing a headset in video passthrough and once without a headset. Using video passthrough induced simulator sickness, created social absence (another person in the physical room feels less present), altered self-reported body schema, and distorted distance perception. However, unlike past research that reported perceptual aftereffects from video passthrough, the current study found none. We discuss the broader implications for the widespread adoption of MR headsets and their impact on theories surrounding presence and body transfer.

    View details for DOI 10.1089/cyber.2024.0398

    View details for PubMedID 39436806

  • Impact of Digital Advertising Policy on Harmful Product Promotion: Natural Language Processing Analysis of Skin-Lightening Ads. American Journal of Preventive Medicine. Lu, J., Chua, S. N., Kavanaugh, J. R., Prashar, J., Ndip-Agbor, E., Santoso, M., Jackson, D. A., Chakraborty, P., Raffoul, A., Austin, S. B. 2024

    Abstract

    Starting June 30, 2022, Google implemented its revised Inappropriate Content Advertising Policy, targeting discriminatory skin-lightening ads that suggest superiority of certain skin shades. This study evaluates changes in ad content from 2 weeks before to 2 weeks after the policy's enforcement.

    Text ads from Google searches in eight countries (Bahamas, Germany, India, Malaysia, Mexico, South Africa, United Arab Emirates, and United States) were collected in 2022, totaling 1,974 pre-policy and 3,262 post-policy ads, and analyzed in 2023. A gold-standard database was established by two coders who labeled 707 ads; it was used to train five natural language processing models to label the ads by content and target demographics. Descriptive statistics and multivariable logistic models were applied to compare content before versus after policy implementation, both globally and by country.

    Vertex AI emerged as the best natural language processing model, with the highest F1 score of 0.87. The prevalence of the labels "Racial or Ethnic Identification" and "Ingredients: Natural" decreased significantly from pre- to post-policy implementation, by 47% and 66%, respectively. Notable differences from pre- to post-policy implementation were identified in India, Mexico, and Germany.

    The study observed changes in skin-lightening product advertisement labels from pre- to post-policy implementation, both globally and within countries. Given the influence of digital advertising on colorist norms, assessing digital ad policy changes is crucial for public health surveillance. This study presents a computational method to help monitor digital platform policies for consumer product advertisements that affect public health.
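    The model-selection step the abstract describes, choosing among competing classifiers by F1 score against a coder-labeled gold standard, can be sketched as follows. This is an illustrative sketch only, not the study's code: the model names, labels, and data are hypothetical placeholders.

    ```python
    def f1_score(gold, predicted):
        """Binary F1 over parallel lists of 0/1 labels."""
        tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
        fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
        fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
        if tp == 0:
            return 0.0
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Hypothetical gold-standard labels for one ad-content category
    # (e.g. whether an ad carries the "Ingredients: Natural" label)
    gold = [1, 0, 1, 1, 0, 1, 0, 0]

    # Predictions from competing (hypothetical) models on the same ads
    model_predictions = {
        "model_a": [1, 0, 1, 0, 0, 1, 0, 0],
        "model_b": [1, 1, 1, 1, 0, 1, 0, 1],
    }

    # Pick the model with the highest F1 against the gold standard
    best = max(model_predictions, key=lambda m: f1_score(gold, model_predictions[m]))
    print(best, round(f1_score(gold, model_predictions[best]), 2))  # → model_a 0.86
    ```

    In the study this comparison was run across five NLP models, with Vertex AI scoring highest (F1 = 0.87); the selected model then labeled the full set of roughly 5,200 ads for the pre/post analysis.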

    View details for DOI 10.1016/j.amepre.2024.08.006

    View details for PubMedID 39306774

  • ALGORITHMS, ADDICTION, AND ADOLESCENT MENTAL HEALTH: An Interdisciplinary Study to Inform State-level Policy Action to Protect Youth from the Dangers of Social Media. American Journal of Law & Medicine. Costello, N., Sutton, R., Jones, M., Almassian, M., Raffoul, A., Ojumu, O., Salvia, M., Santoso, M., Kavanaugh, J. R., Austin, S. B. 2023; 49 (2-3): 135-172

    Abstract

    A recent Wall Street Journal investigation revealed that TikTok floods child and adolescent users with videos of rapid weight-loss methods, including tips on how to consume less than 300 calories a day and promotion of a "corpse bride diet" showing emaciated girls with protruding bones. The investigation involved the creation of a dozen automated accounts registered as 13-year-olds and revealed that TikTok algorithms fed adolescents tens of thousands of weight-loss videos within just a few weeks of joining the platform. Emerging research indicates that these practices extend well beyond TikTok to other social media platforms that engage millions of U.S. youth on a daily basis.

    Social media algorithms that push extreme content to vulnerable youth are linked to an increase in mental health problems for adolescents, including poor body image, eating disorders, and suicidality. Policy measures must be taken to curb this harmful practice. The Strategic Training Initiative for the Prevention of Eating Disorders (STRIPED), a research program based at the Harvard T.H. Chan School of Public Health and Boston Children's Hospital, has assembled a diverse team of scholars, including experts in public health, neuroscience, health economics, and law with specialization in First Amendment law, to study the harmful effects of social media algorithms, identify the economic incentives that drive social media companies to use them, and develop strategies that can be pursued to regulate social media platforms' use of algorithms. For our study, we have examined a critical mass of public health and neuroscience research demonstrating mental health harms to youth. We have conducted a groundbreaking economic study showing that nearly $11 billion in advertising revenue is generated annually by social media platforms through advertisements targeted at users 0 to 17 years old, thus incentivizing platforms to continue their harmful practices. We have also examined legal strategies to address the regulation of social media platforms by conducting reviews of federal and state legal precedent and consulting with stakeholders in business regulation, technology, and federal and state government.

    While nationally the issue is being scrutinized by Congress and the Federal Trade Commission, quicker and more effective legal strategies that would survive constitutional scrutiny may be implemented by states, such as the Age-Appropriate Design Code Act recently adopted in California, which sets standards that online services likely to be accessed by children must follow. Another avenue for regulation may be for states to mandate that social media platforms submit to algorithm risk audits conducted by independent third parties and publicly disclose the results. Furthermore, Section 230 of the federal Communications Decency Act, which has long shielded social media platforms from liability for wrongful acts, may be circumvented if it is proven that social media companies share advertising revenues with content providers posting illegal or harmful content.

    Our research team's public health and economic findings, combined with our legal analysis and resulting recommendations, provide innovative and viable policy actions that state lawmakers and attorneys general can take to protect youth from the harms of dangerous social media algorithms.

    View details for DOI 10.1017/amj.2023.25

    View details for PubMedID 38344782