Max Kasun
Research Professional, Psychiatry and Behavioral Sciences
Bio
Max Kasun works in the Roberts Ethics Lab and Kim Ethics Lab at Stanford, which use empirical methods to help anticipate, clarify, and resolve ethical issues in modern biomedical research. He received his BA from the University of Wisconsin-Madison. His interests include empirical and normative work dedicated to increasing scientific understanding and societal appreciation of the nature, internal experience, and prevalence of mental illness and well-being, as well as moral philosophy (e.g., Kantian ethics, justice, action, ethical naturalism, and pragmatism), the cognitive and affective sciences, and philosophy of mind (e.g., embodiment and personhood). He has co-authored peer-reviewed scientific articles and other scholarly work investigating ethical issues in research (e.g., authentic voluntarism in informed consent), medical education, public health, and neuroscience. His most recent contributions to NIH-funded scientific work (National Center for Advancing Translational Sciences; PI: Dr. Jane Kim) have focused on ethical issues encountered in the design, development, and clinical integration of artificial intelligence, e.g., how environmental and cognitive factors shape appraisals of AI tools, clinical judgments, trust, and health decision-making.
Max is a co-author of several chapters in APA's Study Guide to DSM-5-TR (2024), including the chapters on bipolar and related disorders and on personality disorders. He has provided editorial support for the peer-reviewed journal Academic Medicine and for two works on trauma and related interventions (United Nations, Springer). Previously, he served on leadership teams for the Stanford Mental Health Technology and Innovation Hub and the Stanford Neurodiversity Project.
Max is currently working to develop a new Special Initiative of the Chair on Mental Health Care for Unhoused and Justice-Involved Persons (see https://med.stanford.edu/psychiatry/special-initiatives/mhuj.html). The initiative aims to bring together a community of scholars, public stakeholders, and health care professionals to help advance more humane and participatory inquiry and health policy in service of a population that faces profound controversy, health stigma, and scientific neglect. It also aims to improve how science is communicated to the public and to policy decision-makers, and to develop more evidence-based, pragmatic, strengths-based, and trauma-informed approaches to mental health care for unhoused persons, including those who have experienced episodic or cyclical involvement in the criminal and civil justice systems.
Education & Certifications
-
B.A., University of Wisconsin-Madison (2016)
Professional Interests
moral philosophy
empirical and normative ethics
cognitive sciences
philosophy of mind
aesthetics
public humanities
Professional Affiliations and Activities
-
Member, International Neuroethics Society (2019 - Present)
All Publications
-
Information, collaboration, regulation: Physician and AI researcher views on ethical considerations in clinical AI integration
Big Data & Society
2025; 12 (2)
View details for DOI 10.1177/20539517251343853
View details for Web of Science ID 001497789000001
-
Users' Perceptions and Trust in AI in Direct-to-Consumer mHealth: Qualitative Interview Study.
JMIR mHealth and uHealth
2025; 13: e64715
Abstract
The increasing use of direct-to-consumer artificial intelligence (AI)-enabled mobile health (AI-mHealth) apps presents an opportunity for more effective health management and monitoring and expanded mobile health (mHealth) capabilities. However, AI's early developmental stage has prompted concerns related to trust, privacy, informed consent, and bias, among others. While some of these concerns have been explored in early stakeholder research related to AI-mHealth, the broader landscape of considerations that hold ethical significance to users remains underexplored. Our aim was to document and explore the perspectives of individuals who reported previous experience using mHealth apps and their attitudes and ethically salient considerations regarding direct-to-consumer AI-mHealth apps. As part of a larger study, we conducted semistructured interviews via Zoom with self-reported users of mHealth apps (N=21). Interviews consisted of a series of open-ended questions concerning participants' experiences, attitudes, and values relating to AI-mHealth apps and were conducted until topic saturation was reached. We collaboratively reviewed the interview transcripts and developed a codebook consisting of 37 codes describing recurring or otherwise noteworthy sentiments that inductively arose from the data. A single coder coded all transcripts, and the entire team contributed to conventional qualitative analysis. Our qualitative analysis yielded 3 major categories and 9 subcategories encompassing participants' perspectives. Participants described attitudes toward the impact of AI-mHealth on users' health and personal data (ie, influences on health awareness and management, value for mental vs physical health use cases, and the inevitability of data sharing), influences on their trust in AI-mHealth (ie, endorsements and guidance from health professionals or health or regulatory organizations, attitudes toward technology companies, and reasonable but not necessarily explainable output), and their preferences relating to the amount and type of information that is shared by AI-mHealth apps (ie, the types of data that are collected, future uses of user data, and the accessibility of information). This paper provides additional context relating to a number of concerns previously posited or identified in the AI-mHealth literature, including trust, explainability, and information sharing, and revealed additional considerations that have not been previously documented, that is, users' differentiation between the value of AI-mHealth for physical and mental health use cases and their willingness to extend empathy to nonexplainable AI. To the best of our knowledge, this study is the first to apply an open-ended, qualitative descriptive approach to explore the perspectives of end users of direct-to-consumer AI-mHealth apps.
View details for DOI 10.2196/64715
View details for PubMedID 40392584
-
Recognizing the Work and Well-Being of Residency and Fellowship Program Coordinators
Academic Medicine
2025; 100 (5): 525-526
View details for DOI 10.1097/ACM.0000000000006011
View details for Web of Science ID 001479675200005
-
Academic machine learning researchers' ethical perspectives on algorithm development for health care: a qualitative study.
Journal of the American Medical Informatics Association: JAMIA
2023
Abstract
We set out to describe academic machine learning (ML) researchers' ethical considerations regarding the development of ML tools intended for use in clinical care. We conducted in-depth, semistructured interviews with a sample of ML researchers in medicine (N=10) as part of a larger study investigating stakeholders' ethical considerations in the translation of ML tools in medicine. We used a qualitative descriptive design, applying conventional qualitative content analysis in order to allow participant perspectives to emerge directly from the data. Every participant viewed their algorithm development work as holding ethical significance. While participants shared positive attitudes toward continued ML innovation, they described concerns related to data sampling and labeling (eg, limitations to mitigating bias; ensuring the validity and integrity of data), and algorithm training and testing (eg, selecting quantitative targets; assessing reproducibility). Participants perceived a need to increase interdisciplinary training across stakeholders and to envision more coordinated and embedded approaches to addressing ethics issues. Participants described key areas where increased support for ethics may be needed; technical challenges affecting clinical acceptability; and standards related to scientific integrity, beneficence, and justice that may be higher in medicine compared to other industries engaged in ML innovation. Our results help shed light on the perspectives of ML researchers in medicine regarding the range of ethical issues they encounter or anticipate in their work, including areas where more attention may be needed to support the successful development and integration of medical ML tools.
View details for DOI 10.1093/jamia/ocad238
View details for PubMedID 38069455
-
Physicians' and Machine Learning Researchers' Perspectives on Ethical Issues in the Early Development of Clinical Machine Learning Tools: Qualitative Interview Study.
JMIR AI
2023; 2: e47449
Abstract
Innovative tools leveraging artificial intelligence (AI) and machine learning (ML) are rapidly being developed for medicine, with new applications emerging in prediction, diagnosis, and treatment across a range of illnesses, patient populations, and clinical procedures. One barrier for successful innovation is the scarcity of research in the current literature seeking and analyzing the views of AI or ML researchers and physicians to support ethical guidance. This study aims to describe, using a qualitative approach, the landscape of ethical issues that AI or ML researchers and physicians with professional exposure to AI or ML tools observe or anticipate in the development and use of AI and ML in medicine. Semistructured interviews were used to facilitate in-depth, open-ended discussion, and a purposeful sampling technique was used to identify and recruit participants. We conducted 21 semistructured interviews with a purposeful sample of AI and ML researchers (n=10) and physicians (n=11). We asked interviewees about their views regarding ethical considerations related to the adoption of AI and ML in medicine. Interviews were transcribed and deidentified by members of our research team. Data analysis was guided by the principles of qualitative content analysis. This approach, in which transcribed data is broken down into descriptive units that are named and sorted based on their content, allows for the inductive emergence of codes directly from the data set. Notably, both researchers and physicians articulated concerns regarding how AI and ML innovations are shaped in their early development (ie, the problem formulation stage). Considerations encompassed the assessment of research priorities and motivations, clarity and centeredness of clinical needs, professional and demographic diversity of research teams, and interdisciplinary knowledge generation and collaboration. Phase-1 ethical issues identified by interviewees were notably interdisciplinary in nature and invited questions regarding how to align priorities and values across disciplines and ensure clinical value throughout the development and implementation of medical AI and ML. Relatedly, interviewees suggested interdisciplinary solutions to these issues, for example, more resources to support knowledge generation and collaboration between developers and physicians, engagement with a broader range of stakeholders, and efforts to increase diversity in research broadly and within individual teams. These qualitative findings help elucidate several ethical challenges anticipated or encountered in AI and ML for health care. Our study is unique in that its use of open-ended questions allowed interviewees to explore their sentiments and perspectives without overreliance on implicit assumptions about what AI and ML currently are or are not. This analysis, however, does not include the perspectives of other relevant stakeholder groups, such as patients, ethicists, industry researchers or representatives, or other health care professionals beyond physicians. Additional qualitative and quantitative research is needed to reproduce and build on these findings.
View details for DOI 10.2196/47449
View details for PubMedID 38875536
View details for PubMedCentralID PMC11041441
-
Factors Influencing Perceived Helpfulness and Participation in Innovative Research: A Pilot Study of Individuals with and without Mood Symptoms.
Ethics & Behavior
2022; 32 (7): 601-617
Abstract
Little is known about how individuals with and without mood disorders perceive the inherent risks and helpfulness of participating in innovative psychiatric research, or about the factors that influence their willingness to participate. We conducted an online survey with 80 individuals (self-reported mood disorder [n = 25], self-reported good health [n = 55]) recruited via MTurk. We assessed respondents' perceptions of risk and helpfulness in study vignettes associated with two innovative research projects (intravenous ketamine therapy and wearable devices), as well as their willingness to participate in these projects. Respondents with and without mood disorders perceived risk similarly across projects. Respondents with no mood disorders viewed both projects as more helpful to society than to research volunteers, while respondents with mood disorders viewed the projects as equally helpful to volunteers and society. Individuals with mood disorders perceived ketamine research, and the two projects on average, as more helpful to research volunteers than did individuals without mood disorders. Our findings add to a limited empirical literature on the perspectives of volunteers in innovative psychiatric research.
View details for DOI 10.1080/10508422.2021.1957678
View details for PubMedID 36200069
View details for PubMedCentralID PMC9528999
-
Self-reported influences on willingness to receive COVID-19 vaccines among physically ill, mentally ill, and healthy individuals.
Journal of Psychiatric Research
2022; 155: 501-510
Abstract
OBJECTIVE: Individuals with mental and physical disorders have been disproportionately affected by adverse health outcomes due to the COVID-19 pandemic, and yet vaccine hesitancy persists despite clear evidence of health benefits. Therefore, our study explored factors influencing willingness to receive a COVID-19 vaccine. METHODS: Individuals with mental illness (n=332), physical illness (n=331), and no health issues (n=328) were recruited via Amazon Mechanical Turk. Participants rated willingness to obtain a fully approved COVID-19 vaccine or a vaccine approved only for experimental/emergency use and influences in six domains upon their views. We examined differences by health status. RESULTS: Participants across groups were moderately willing to receive a COVID-19 vaccine. Perceived risk was negatively associated with willingness. Participants differentiated between vaccine risk by approval stage and were less willing to receive an experimental vaccine. Individuals with mental illness rated risk of both vaccines similarly to healthy individuals. Individuals with physical illness expressed less willingness to receive an experimental vaccine. Domain influences differently affected willingness by health status as well as by vaccine approval status. CONCLUSIONS: Our findings are reassuring regarding the ability of people with mental disorders to appreciate risk in medical decision-making and the ability of people of varied health backgrounds to distinguish between the benefits and risks of clinical care and research, refuting the prevailing notions of psychiatric exceptionalism and therapeutic misconception. Our findings shine a light on potential paths forward to support vaccine acceptance.
View details for DOI 10.1016/j.jpsychires.2022.09.017
View details for PubMedID 36191518
-
Perceived protectiveness of research safeguards and influences on willingness to participate in research: A novel MTurk pilot study.
Journal of Psychiatric Research
2021; 138: 200-206
Abstract
Little is known about how individuals with mood disorders view the protectiveness of research safeguards, and whether their views affect their willingness to participate in psychiatric research. We conducted an online survey with 80 individuals (self-reported mood disorder [n=25], self-reported good health [n=55]) recruited via MTurk. We assessed respondents' perceptions of the protectiveness of five common research safeguards, as well as their willingness to participate in research that incorporates each safeguard. Perceived protectiveness was strongly related to willingness to participate in research for four of the safeguards. Our findings add to a limited literature on the motivations and perspectives of key stakeholders in psychiatric research.
View details for DOI 10.1016/j.jpsychires.2021.04.005
View details for PubMedID 33865169
ORCID: https://orcid.org/0000-0002-6364-6234