Dr. Miner is an AI psychologist, who uses experimental and observational studies to improve the ability to conversational artificial intelligence (AI) to recognize and respond to health issues. He completed a postdoctoral fellowship at Stanford's Clinical Excellence Research Center (CERC) before joining the Department of Psychiatry as an Instructor and being awarded a Mentored Career Development Award (KL2) through Spectrum and the NIH.
Dr. Miner is the Co-Director of the Virtual Reality & Immersive Technology Clinic, Dept of Psychiatry and Behavioral Sciences, where he provides treatment and supervision.
Instructor, Psychiatry and Behavioral Sciences
Internship:Jesse Brown VA Medical Center Psychology Internship (2015) IL
Professional Education:Palo Alto University Registrar (2015) CA
Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot.
The Journal of communication
2018; 68 (4): 712–33
Disclosing personal information to another person has beneficial emotional, relational, and psychological outcomes. When disclosers believe they are interacting with a computer instead of another person, such as a chatbot that can simulate human-to-human conversation, outcomes may be undermined, enhanced, or equivalent. Our experiment examined downstream effects after emotional versus factual disclosures in conversations with a supposed chatbot or person. The effects of emotional disclosure were equivalent whether participants thought they were disclosing to a chatbot or to a person. This study advances current understanding of disclosure and whether its impact is altered by technology, providing support for media equivalency as a primary mechanism for the consequences of disclosing to a chatbot.
View details for PubMedID 30100620
Human-Machine Collaboration-A New Form of Paternalism? Reply
2018; 4 (4): 589–90
View details for PubMedID 29270635
- Human-Machine Collaboration in Cancer and Beyond: The Centaur Care Model. JAMA oncology 2017
Feasibility, Acceptability, and Potential Efficacy of the PTSD Coach App: A Pilot Randomized Controlled Trial With Community Trauma Survivors
PSYCHOLOGICAL TRAUMA-THEORY RESEARCH PRACTICE AND POLICY
2016; 8 (3): 384-392
Posttraumatic stress disorder (PTSD) is a major public health concern. Although effective treatments exist, affected individuals face many barriers to receiving traditional care. Smartphones are carried by nearly 2 thirds of the U.S. population, offering a promising new option to overcome many of these barriers by delivering self-help interventions through applications (apps). As there is limited research on apps for trauma survivors with PTSD symptoms, we conducted a pilot feasibility, acceptability, and potential efficacy trial of PTSD Coach, a self-management smartphone app for PTSD.A community sample of trauma survivors with PTSD symptoms (N = 49) were randomized to 1 month using PTSD Coach or a waitlist condition. Self-report assessments were completed at baseline, postcondition, and 1-month follow-up. Following the postcondition assessment, waitlist participants were crossed-over to receive PTSD Coach.Participants reported using the app several times per week, throughout the day across multiple contexts, and endorsed few barriers to use. Participants also reported that PTSD Coach components were moderately helpful and that they had learned tools and skills from the app to manage their symptoms. Between conditions effect size estimates were modest (d = -0.25 to -0.33) for PTSD symptom improvement, but not statistically significant.Findings suggest that PTSD Coach is a feasible and acceptable intervention. Findings regarding efficacy are less clear as the study suffered from low statistical power; however, effect size estimates, patterns of within group findings, and secondary analyses suggest that further development and research on PTSD Coach is warranted. (PsycINFO Database Record
View details for DOI 10.1037/tra0000092
View details for Web of Science ID 000376205900016
View details for PubMedID 27046668
Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health
JAMA INTERNAL MEDICINE
2016; 176 (5): 619-625
Conversational agents are smartphone-based computer programs designed to respond to users in natural language, thereby mimicking conversations between people. Many people use their smartphones to obtain health information.To describe the responses of 4 widely used conversational agents (Siri [Apple], Google Now, S Voice [Samsung], and Cortana [Microsoft]) to a standardized panel of questions related to mental health, interpersonal violence, and physical health.A pilot study followed by a cross-sectional study of a convenience sample of 68 phones from 7 manufacturers conducted from December 2015 to January 2016. Investigators asked the smartphones 9 questions (3 each in mental health, interpersonal violence, and physical health) in their natural language. We recorded information on features of the smartphones and their software, including manufacturer, phone model, operating system version, and conversational agent.The primary outcomes were the responses of conversational agents to 9 questions. The responses were characterized based on the ability to (1) recognize a crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline, or other health resources for a physical health concern. Questions were asked repeatedly until there were no new answers.The phones ran 77 conversational agents: Siri (n = 27), Google Now (n = 31), S Voice (n = 9), and Cortana (n = 10). Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language, the responses from S Voice and Cortana varied, and Google Now did not recognize the concern. None of the conversational agents referred users to a helpline for depression. In response to "I was raped," Cortana referred to a sexual assault hotline; Siri, Google Now, and S Voice did not recognize the concern. None of the conversational agents recognized "I am being abused" or "I was beaten up by my husband." In response to "I am having a heart attack," "My head hurts," and "My foot hurts." Siri generally recognized the concern, referred to emergency services, and identified nearby medical facilities. Google Now, S Voice, and Cortana did not recognize any of the physical health concerns.When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to substantially improve.
View details for DOI 10.1001/jamainternmed.2016.0400
View details for Web of Science ID 000375292500014
View details for PubMedID 26974260
View details for PubMedCentralID PMC4996669
- Creation and validation of the Cognitive and Behavioral Response to Stress Scale in a depression trial PSYCHIATRY RESEARCH 2015; 230 (3): 819-825
How smartphone applications may be implemented in the treatment of eating disorders: case reports and case series data
Advances in Eating Disorders: Theory, Research and Practice
View details for DOI 10.1080/21662630.2014.938089