All Publications


  • Not in my AI: Moral engagement and disengagement in health care AI development. Pacific Symposium on Biocomputing Nichol, A. A., Halley, M. C., Federico, C. A., Cho, M. K., Sankar, P. L. 2023; 28: 496-506

    Abstract

    Machine learning predictive analytics (MLPA) are utilized increasingly in health care, but can pose harms to patients, clinicians, health systems, and the public. The dynamic nature of this technology creates unique challenges to evaluating safety and efficacy and minimizing harms. In response, regulators have proposed an approach that would shift more responsibility to MLPA developers for mitigating potential harms. To be effective, this approach requires MLPA developers to recognize, accept, and act on responsibility for mitigating harms. In interviews of 40 MLPA developers of health care applications in the United States, we found that a subset of ML developers made statements reflecting moral disengagement, representing several different potential rationales that could create distance between personal accountability and harms. However, we also found a different subset of ML developers who expressed recognition of their role in creating potential hazards, the moral weight of their design decisions, and a sense of responsibility for mitigating harms. We also found evidence of moral conflict and uncertainty about responsibility for averting harms as an individual developer working in a company. These findings suggest possible facilitators and barriers to the development of ethical ML that could act through encouragement of moral engagement or discouragement of moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.

    View details for PubMedID 36541003

  • Stronger regulation of AI in biomedicine. Science Translational Medicine Trotsyuk, A. A., Federico, C. A., Cho, M. K., Altman, R. B., Magnus, D. 2023; 15 (713): eadi0336

    Abstract

    Regulatory agencies need to ensure the safety and equity of AI in biomedicine, and the time to do so is now.

    View details for DOI 10.1126/scitranslmed.adi0336

    View details for PubMedID 37703349

  • Ethical and epistemic issues in the design and conduct of pragmatic stepped-wedge cluster randomized clinical trials. Contemporary Clinical Trials Federico, C. A., Heagerty, P. J., Lantos, J., O'Rourke, P., Rahimzadeh, V., Sugarman, J., Weinfurt, K., Wendler, D., Wilfond, B. S., Magnus, D. 2022: 106703

    Abstract

    Stepped-wedge cluster randomized trial (SW-CRT) designs are increasingly employed in pragmatic research; they differ from traditional parallel cluster randomized trials, in which an intervention is delivered to only a subset of clusters. In an SW-CRT, all clusters receive the intervention under investigation by the end of the study. This approach is thought to avoid ethical concerns about denying a desired intervention to participants in control groups. Such concerns have been cited in the literature as a primary motivation for choosing an SW-CRT design; however, SW-CRTs raise additional ethical concerns related to delayed implementation of the intervention and to consent. Yet pragmatic clinical trial (PCT) investigators may choose SW-CRT designs simply because they consider other study designs infeasible. In this paper, we examine justifications for using an SW-CRT design over other designs by drawing on the experience of the National Institutes of Health's Health Care Systems Research Collaboratory (NIH Collaboratory) with five pragmatic SW-CRTs. We found that decisions to use an SW-CRT design were justified by practical and epistemic reasons rather than ethical ones, including concerns about feasibility, the heterogeneity of cluster characteristics, and the desire for simultaneous clinical evaluation and implementation. We compare the potential benefits of SW-CRTs against the ethical and epistemic challenges raised by the design and suggest that the choice of an SW-CRT design must balance epistemic, feasibility, and ethical justifications. Moreover, given their complexity, such studies need rigorous and informed ethical oversight.

    View details for DOI 10.1016/j.cct.2022.106703

    View details for PubMedID 35176501

  • Reducing barriers to ethics in neuroscience. Frontiers in Human Neuroscience Illes, J., Tairyan, K., Federico, C. A., Tabet, A., Glover, G. H. 2010; 4

    Abstract

    Ethics is a growing interest for neuroscientists, but rather than signifying a commitment to the protection of human subjects, the care of animals, and the public understanding in which the professional community is fundamentally engaged, that interest has been consumed by administrative overhead and the mission creep of institutional ethics reviews. Faculty, trainees, and staff (n = 605) in North America whose work involves brain imaging and brain stimulation completed an online survey about ethics in their research. Using factor analysis and linear regression, we found significant effects for the invasiveness of the imaging technique, professional position, gender, and the local presence of bioethics centers. We propose strategies for improving communication between the neuroscience community and ethics review boards, collaborations between neuroscientists and biomedical ethicists, and ethics training in graduate neuroscience programs to revitalize mutual goals and interests.

    View details for DOI 10.3389/fnhum.2010.00167

    View details for Web of Science ID 000289308000001

    View details for PubMedID 20953291

    View details for PubMedCentralID PMC2955400