Nicole Martinez-Martin
Assistant Professor (Research) of Pediatrics (Biomedical Ethics) and, by courtesy, of Psychiatry and Behavioral Sciences (Child & Adolescent Psychiatry & Child Development)
Pediatrics - Center for Biomedical Ethics
Bio
Nicole Martinez-Martin received her JD from Harvard Law School and her doctorate in social sciences (comparative development/medical anthropology) from the University of Chicago. Her broader research interests concern the impact of new technologies on the treatment of vulnerable populations. Her graduate research included the study of cross-cultural approaches to mental health services in the Latine community and the use of neuroscience in criminal cases. Her recent work in bioethics and neuroethics has focused on the ethics of AI and digital health technology, such as digital phenotyping and computer vision, for medical and behavioral applications.
She has served as PI for research projects examining ethical issues in machine learning in health care, digital health technology, digital contact tracing, and digital phenotyping. She has also examined policy and regulatory issues related to privacy and data governance, bias, and oversight of machine learning and digital health technology. Her K01 career development grant, funded through NIMH, focuses on the ethics of machine learning and digital mental health technology. Her recent research includes examining bias, equity, and inclusion as they pertain to machine learning and digital health, as well as the social implications of privacy and data protections for marginalized groups.
Academic Appointments
- Assistant Professor (Research), Pediatrics - Center for Biomedical Ethics
- Assistant Professor (Research) (By courtesy), Psychiatry and Behavioral Sciences - Child & Adolescent Psychiatry and Child Development
- Member, Wu Tsai Neurosciences Institute
Boards, Advisory Committees, Professional Organizations
- Board Member, International Neuroethics Society (2023 - Present)
- Diversity & Inclusion Task Force, International Neuroethics Society (2020 - 2022)
- Co-Chair, Neuroethics Framework - Legal System Working Group, IEEE (2020 - Present)
- Ethics Committee, International Society for Psychiatric Genetics (2019 - Present)
Current Research and Scholarly Interests
- NIH/National Institute of Mental Health, K01 MH118375-01A1: "Ethical, Legal and Social Implications in the Use of Digital Technology for Mental Health Applications"
- Greenwall Foundation, Making a Difference in Bioethics Grant: "Ethical, Legal and Social Implications of Digital Phenotyping"
2024-25 Courses
- Equity & Justice in Biotechnologies: Who Benefits and Who Is Left Behind (COLLEGE 117, Spr)
- Introduction to Science, Technology & Society (STS 1, Win)
- The Ethics of Innovative Life-Saving Technologies for Children with Heart Disease (STS 115, Win)
Independent Studies (1)
- Advanced Individual Work (STS 299, Aut)
Prior Year Courses
2023-24 Courses
- Introduction to Science, Technology & Society (STS 1, Win)
- Where Does it Hurt?: Medicine and Suffering in Global Context (COLLEGE 108, Spr)
2022-23 Courses
- Introduction to Science, Technology & Society (STS 1, Win)
- Where Does it Hurt?: Medicine and Suffering in Global Context (COLLEGE 108, Spr)
2021-22 Courses
- Introduction to Science, Technology & Society (CSRE 1T, STS 1, Spr)
All Publications
- Rationales and Approaches to Protecting Brain Data: a Scoping Review
Neuroethics. 2024; 17 (1)
DOI: 10.1007/s12152-023-09534-1
Web of Science ID: 001100648300001
- Re-thinking the Ethics of International Bioethics Conferencing
The American Journal of Bioethics (AJOB). 2024; 24 (4): 55-57
DOI: 10.1080/15265161.2024.2308128
PubMedID: 38529987
- Peer review of GPT-4 technical report and systems card
PLOS Digital Health. 2024; 3 (1): e0000417
Abstract
The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer review guidelines. The GPT-4 report shows a significant commitment to transparent AI research, particularly in creating a systems card for risk assessment and mitigation. However, it reveals limitations such as restricted access to training data, inadequate confidence and uncertainty estimations, and concerns over privacy and intellectual property rights. Key strengths identified include the considerable time and economic investment in transparent AI research and the creation of a comprehensive systems card. On the other hand, the lack of clarity in training processes and data raises concerns about encoded biases and interests in GPT-4. The report also lacks confidence and uncertainty estimations, crucial in high-risk areas like healthcare, and fails to address potential privacy and intellectual property issues. Furthermore, this study emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. The paper presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step towards open discussions on LLMs, more extensive interdisciplinary reviews are essential for addressing bias, harm, and risk concerns, especially in high-risk domains. The review aims to expand the understanding of LLMs in general and highlights the need for new reflection forms on how LLMs are reviewed, the data required for effective evaluation, and addressing critical issues like bias and risk.
DOI: 10.1371/journal.pdig.0000417
PubMedID: 38236824
PubMedCentralID: PMC10795998
- Delivering on NIH data sharing requirements: avoiding Open Data in Appearance Only
BMJ Health & Care Informatics. 2023; 30 (1)
Abstract
Introduction: In January, the National Institutes of Health (NIH) implemented a Data Management and Sharing Policy aiming to leverage data collected during NIH-funded research. The COVID-19 pandemic illustrated that this practice is equally vital for augmenting patient research. In addition, data sharing acts as a necessary safeguard against the introduction of analytical biases. While the pandemic provided an opportunity to curtail critical research issues such as reproducibility and validity through data sharing, this did not materialise in practice and became an example of 'Open Data in Appearance Only' (ODIAO). Here, we define ODIAO as the intent of data sharing without the occurrence of actual data sharing (eg, material or digital data transfers).
Objective: Propose a framework that states the main risks associated with data sharing, systematically present risk mitigation strategies and provide examples through a healthcare lens.
Methods: This framework was informed by critical aspects of both the Open Data Institute and the NIH's 2023 Data Management and Sharing Policy plan guidelines.
Results: Through our examination of legal, technical, reputational and commercial categories, we find barriers to data sharing ranging from misinterpretation of the General Data Privacy Rule to lack of technical personnel able to execute large data transfers. From this, we deduce that at numerous touchpoints, data sharing is presently too disincentivised to become the norm.
Conclusion: In order to move towards Open Data, we propose the creation of mechanisms for incentivisation, beginning with recentring data sharing on patient benefits, additional clauses in grant requirements and committees to encourage adherence to data reporting practices.
DOI: 10.1136/bmjhci-2023-100771
PubMedID: 37344002
- Returning Individual Research Results from Digital Phenotyping in Psychiatry
The American Journal of Bioethics (AJOB). 2023: 1-22
Abstract
Psychiatry is rapidly adopting digital phenotyping and artificial intelligence/machine learning tools to study mental illness based on tracking participants' locations, online activity, phone and text message usage, heart rate, sleep, physical activity, and more. Existing ethical frameworks for return of individual research results (IRRs) are inadequate to guide researchers for when, if, and how to return this unprecedented number of potentially sensitive results about each participant's real-world behavior. To address this gap, we convened an interdisciplinary expert working group, supported by a National Institute of Mental Health grant. Building on established guidelines and the emerging norm of returning results in participant-centered research, we present a novel framework specific to the ethical, legal, and social implications of returning IRRs in digital phenotyping research. Our framework offers researchers, clinicians, and Institutional Review Boards (IRBs) urgently needed guidance, and the principles developed here in the context of psychiatry will be readily adaptable to other therapeutic areas.
DOI: 10.1080/15265161.2023.2180109
PubMedID: 37155651
- Viewing CAI as a Tool Within the Mental Health Care System
The American Journal of Bioethics (AJOB). 2023; 23 (5): 57-59
DOI: 10.1080/15265161.2023.2191058
Web of Science ID: 000981501700020
PubMedID: 37130393
- Passive monitoring by smart toilets for precision health
Science Translational Medicine. 2023; 15 (681): eabk3489
Abstract
Smart toilets are a key tool for enabling precision health monitoring in the home, but such passive monitoring has ethical considerations.
DOI: 10.1126/scitranslmed.abk3489
PubMedID: 36724240
- Epistemic Rights and Responsibilities of Digital Simulacra for Biomedicine
The American Journal of Bioethics (AJOB). 2022: 1-12
Abstract
Big data and AI have enabled digital simulation for prediction of future health states or behaviors of specific individuals, populations or humans in general. "Digital simulacra" use multimodal datasets to develop computational models that are virtual representations of people or groups, generating predictions of how systems evolve and react to interventions over time. These include digital twins and virtual patients for in silico clinical trials, both of which seek to transform research and health care by speeding innovation and bridging the epistemic gap between population-based research findings and their application to the individual. Nevertheless, digital simulacra mark a major milestone on a trajectory to embrace the epistemic culture of data science and a potential abandonment of medical epistemological concepts of causality and representation. In doing so, "data first" approaches potentially shift moral attention from actual patients and principles, such as equity, to simulated patients and patient data.
DOI: 10.1080/15265161.2022.2146785
PubMedID: 36507873
- Envisioning a Path toward Equitable and Effective Digital Mental Health
AJOB Neuroscience. 2022; 13 (3): 196-198
DOI: 10.1080/21507740.2022.2082597
PubMedID: 35797130
- Bridging the AI Chasm: Can EBM Address Representation and Fairness in Clinical Machine Learning?
The American Journal of Bioethics (AJOB). 2022; 22 (5): 30-32
DOI: 10.1080/15265161.2022.2055212
PubMedID: 35475967
- Ethical Development of Digital Phenotyping Tools for Mental Health Applications: Delphi Study
JMIR mHealth and uHealth. 2021; 9 (7): e27343
Abstract
Background: Digital phenotyping (also known as personal sensing, intelligent sensing, or body computing) involves the collection of biometric and personal data in situ from digital devices, such as smartphones, wearables, or social media, to measure behavior or other health indicators. The collected data are analyzed to generate moment-by-moment quantification of a person's mental state and potentially predict future mental states. Digital phenotyping projects incorporate data from multiple sources, such as electronic health records, biometric scans, or genetic testing. As digital phenotyping tools can be used to study and predict behavior, they are of increasing interest for a range of consumer, government, and health care applications. In clinical care, digital phenotyping is expected to improve mental health diagnoses and treatment. At the same time, mental health applications of digital phenotyping present significant areas of ethical concern, particularly in terms of privacy and data protection, consent, bias, and accountability.
Objective: This study aims to develop consensus statements regarding key areas of ethical guidance for mental health applications of digital phenotyping in the United States.
Methods: We used a modified Delphi technique to identify the emerging ethical challenges posed by digital phenotyping for mental health applications and to formulate guidance for addressing these challenges. Experts in digital phenotyping, data science, mental health, law, and ethics participated as panelists in the study. The panel arrived at consensus recommendations through an iterative process involving interviews and surveys. The panelists focused primarily on clinical applications for digital phenotyping for mental health but also included recommendations regarding transparency and data protection to address potential areas of misuse of digital phenotyping data outside of the health care domain.
Results: The findings of this study showed strong agreement related to these ethical issues in the development of mental health applications of digital phenotyping: privacy, transparency, consent, accountability, and fairness. Consensus regarding the recommendation statements was strongest when the guidance was stated broadly enough to accommodate a range of potential applications. The privacy and data protection issues that the Delphi participants found particularly critical to address related to the perceived inadequacies of current regulations and frameworks for protecting sensitive personal information and the potential for sale and analysis of personal data outside of health systems.
Conclusions: The Delphi study found agreement on a number of ethical issues to prioritize in the development of digital phenotyping for mental health applications. The Delphi consensus statements identified general recommendations and principles regarding the ethical application of digital phenotyping to mental health. As digital phenotyping for mental health is implemented in clinical care, there remains a need for empirical research and consultation with relevant stakeholders to further understand and address relevant ethical issues.
DOI: 10.2196/27343
PubMedID: 34319252
- Dimensions of Research-Participant Interaction: Engagement is Not a Replacement for Consent
The Journal of Law, Medicine & Ethics. 2020; 48 (1): 183-184
DOI: 10.1177/1073110520917008
PubMedID: 32342787
- What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care?
AMA Journal of Ethics. 2019; 21 (2): E180-187
Abstract
Applications of facial recognition technology (FRT) in health care settings have been developed to identify and monitor patients as well as to diagnose genetic, medical, and behavioral conditions. The use of FRT in health care suggests the importance of informed consent, data input and analysis quality, effective communication about incidental findings, and potential influence on patient-clinician relationships. Privacy and data protection are thought to present challenges for the use of FRT for health applications.
DOI: 10.1001/amajethics.2019.180
PubMedID: 30794128
- Data mining for health: staking out the ethical territory of digital phenotyping
npj Digital Medicine. 2018; 1
DOI: 10.1038/s41746-018-0075-8
Web of Science ID: 000453910600001
- Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?
AMA Journal of Ethics. 2018; 20 (9): E804-811
Abstract
Machine learning is a method for predicting clinically relevant variables, such as opportunities for early intervention, potential treatment response, prognosis, and health outcomes. This commentary examines the following ethical questions about machine learning in a case of a patient with new onset psychosis: (1) When is clinical innovation ethically acceptable? (2) How should clinicians communicate with patients about the ethical issues raised by a machine learning predictive model?
PubMedID: 30242810
- Surveillance and Digital Health
The American Journal of Bioethics (AJOB). 2018; 18 (9): 67-68
PubMedID: 30235099
- Ethical Issues for Direct-to-Consumer Digital Psychotherapy Apps: Addressing Accountability, Data Protection, and Consent
JMIR Mental Health. 2018; 5 (2): e32
Abstract
This paper focuses on the ethical challenges presented by direct-to-consumer (DTC) digital psychotherapy services that do not involve oversight by a professional mental health provider. DTC digital psychotherapy services can potentially assist in improving access to mental health care for the many people who would otherwise not have the resources or ability to connect with a therapist. However, the lack of adequate regulation in this area exacerbates concerns over how safety, privacy, accountability, and other ethical obligations to protect an individual in therapy are addressed within these services. In the traditional therapeutic relationship, there are ethical obligations that serve to protect the interests of the client and provide warnings. In contrast, in a DTC therapy app, there are no clear lines of accountability or associated ethical obligations to protect the user seeking mental health services. The types of DTC services that present ethical challenges include apps that use a digital platform to connect users to minimally trained nonprofessional counselors, as well as services that provide counseling steered by artificial intelligence and conversational agents. There is a need for adequate oversight of DTC nonprofessional psychotherapy services and additional empirical research to inform policy that will provide protection to the consumer.
DOI: 10.2196/mental.9423
Web of Science ID: 000430917500002
PubMedID: 29685865
PubMedCentralID: PMC5938696