Dr. Miner is a licensed clinical psychologist and epidemiologist.
He uses experimental and observational studies to improve the ability to conversational artificial intelligence (AI) to recognize and respond to health issues.
He completed a postdoctoral fellowship at Stanford's Clinical Excellence Research Center (CERC) before joining the Department of Psychiatry as an Instructor and being awarded a Mentored Career Development Award (KL2) through Spectrum and the NIH. He completed a Masters in Epidemiology and Clinical Research from Stanford Department of of Epidemiology and Population Health.
Dr. Miner conducts translational research often at the intersection of computer science and medicine.
Instructor, Psychiatry and Behavioral Sciences
MS, Stanford University School of Medicine, MS in Epidemiology and Clinical Research (2019)
Internship: Jesse Brown VA Medical Center Psychology Internship (2015) IL
PsyD, PGSP-Stanford PsyD Consortium, Doctorate in Clinical Psychology (2015)
- Examining the Examiners: How Medical Death Investigators Describe Suicidal, Homicidal, and Accidental Death HEALTH COMMUNICATION 2020
Assessing the accuracy of automatic speech recognition for psychotherapy.
NPJ digital medicine
2020; 3: 82
Accurate transcription of audio recordings in psychotherapy would improve therapy effectiveness, clinician training, and safety monitoring. Although automatic speech recognition software is commercially available, its accuracy in mental health settings has not been well described. It is unclear which metrics and thresholds are appropriate for different clinical use cases, which may range from population descriptions to individual safety monitoring. Here we show that automatic speech recognition is feasible in psychotherapy, but further improvements in accuracy are needed before widespread use. Our HIPAA-compliant automatic speech recognition system demonstrated a transcription word error rate of 25%. For depression-related utterances, sensitivity was 80% and positive predictive value was 83%. For clinician-identified harm-related sentences, the word error rate was 34%. These results suggest that automatic speech recognition may support understanding of language patterns and subgroup variation in existing treatments but may not be ready for individual-level safety surveillance.
View details for DOI 10.1038/s41746-020-0285-8
View details for PubMedID 32550644
View details for PubMedCentralID PMC7270106
Examining the Examiners: How Medical Death Investigators Describe Suicidal, Homicidal, and Accidental Death.
This study describes differences in medicolegal death investigators' written descriptions for people who died by homicide, suicide, or accident. We evaluated 17 years of death descriptions from a midsized metropolitan midwestern county in the United States to assess how death investigators psychologically respond to different manners of death (N = 10,408 cases). Automated text analyses suggest investigators describe accidental deaths with more immediacy relative to homicides, while they also described suicidal deaths in less emotional terms than homicides as well. These data suggest medicolegal death investigators have different psychological reactions to circumstances and manners of death as indicated by their professional writing. Future research may surface context-specific psychological reactions to vicarious trauma that could inform the design or personalization of workplace-coping interventions.
View details for DOI 10.1080/10410236.2020.1851862
View details for PubMedID 33950764
- Conversational Agents for Health and Wellbeing ASSOC COMPUTING MACHINERY. 2020
- Chatbots in the fight against the COVID-19 pandemic. NPJ digital medicine 2020; 3 (1): 65
Key Considerations for Incorporating Conversational AI in Psychotherapy.
Frontiers in psychiatry
2019; 10: 746
Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe it should not be used, and if it isn't trusted, it won't be. In order to assess safety, trust, interfaces, procedures, and system level workflows, oversight and collaboration is needed between AI systems, patients, clinicians, and administrators.
View details for DOI 10.3389/fpsyt.2019.00746
View details for PubMedID 31681047
Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot.
The Journal of communication
2018; 68 (4): 712–33
Disclosing personal information to another person has beneficial emotional, relational, and psychological outcomes. When disclosers believe they are interacting with a computer instead of another person, such as a chatbot that can simulate human-to-human conversation, outcomes may be undermined, enhanced, or equivalent. Our experiment examined downstream effects after emotional versus factual disclosures in conversations with a supposed chatbot or person. The effects of emotional disclosure were equivalent whether participants thought they were disclosing to a chatbot or to a person. This study advances current understanding of disclosure and whether its impact is altered by technology, providing support for media equivalency as a primary mechanism for the consequences of disclosing to a chatbot.
View details for PubMedID 30100620
- Human-Machine Collaboration in Cancer and Beyond: The Centaur Care Model. JAMA oncology 2017
Talking to Machines About Personal Mental Health Problems.
2017; 318 (13): 1217–18
View details for PubMedID 28973225
Feasibility, Acceptability, and Potential Efficacy of the PTSD Coach App: A Pilot Randomized Controlled Trial With Community Trauma Survivors
PSYCHOLOGICAL TRAUMA-THEORY RESEARCH PRACTICE AND POLICY
2016; 8 (3): 384-392
Posttraumatic stress disorder (PTSD) is a major public health concern. Although effective treatments exist, affected individuals face many barriers to receiving traditional care. Smartphones are carried by nearly 2 thirds of the U.S. population, offering a promising new option to overcome many of these barriers by delivering self-help interventions through applications (apps). As there is limited research on apps for trauma survivors with PTSD symptoms, we conducted a pilot feasibility, acceptability, and potential efficacy trial of PTSD Coach, a self-management smartphone app for PTSD.A community sample of trauma survivors with PTSD symptoms (N = 49) were randomized to 1 month using PTSD Coach or a waitlist condition. Self-report assessments were completed at baseline, postcondition, and 1-month follow-up. Following the postcondition assessment, waitlist participants were crossed-over to receive PTSD Coach.Participants reported using the app several times per week, throughout the day across multiple contexts, and endorsed few barriers to use. Participants also reported that PTSD Coach components were moderately helpful and that they had learned tools and skills from the app to manage their symptoms. Between conditions effect size estimates were modest (d = -0.25 to -0.33) for PTSD symptom improvement, but not statistically significant.Findings suggest that PTSD Coach is a feasible and acceptable intervention. Findings regarding efficacy are less clear as the study suffered from low statistical power; however, effect size estimates, patterns of within group findings, and secondary analyses suggest that further development and research on PTSD Coach is warranted. (PsycINFO Database Record
View details for DOI 10.1037/tra0000092
View details for Web of Science ID 000376205900016
View details for PubMedID 27046668
Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health
JAMA INTERNAL MEDICINE
2016; 176 (5): 619-625
Conversational agents are smartphone-based computer programs designed to respond to users in natural language, thereby mimicking conversations between people. Many people use their smartphones to obtain health information.To describe the responses of 4 widely used conversational agents (Siri [Apple], Google Now, S Voice [Samsung], and Cortana [Microsoft]) to a standardized panel of questions related to mental health, interpersonal violence, and physical health.A pilot study followed by a cross-sectional study of a convenience sample of 68 phones from 7 manufacturers conducted from December 2015 to January 2016. Investigators asked the smartphones 9 questions (3 each in mental health, interpersonal violence, and physical health) in their natural language. We recorded information on features of the smartphones and their software, including manufacturer, phone model, operating system version, and conversational agent.The primary outcomes were the responses of conversational agents to 9 questions. The responses were characterized based on the ability to (1) recognize a crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline, or other health resources for a physical health concern. Questions were asked repeatedly until there were no new answers.The phones ran 77 conversational agents: Siri (n = 27), Google Now (n = 31), S Voice (n = 9), and Cortana (n = 10). Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language, the responses from S Voice and Cortana varied, and Google Now did not recognize the concern. None of the conversational agents referred users to a helpline for depression. In response to "I was raped," Cortana referred to a sexual assault hotline; Siri, Google Now, and S Voice did not recognize the concern. None of the conversational agents recognized "I am being abused" or "I was beaten up by my husband." In response to "I am having a heart attack," "My head hurts," and "My foot hurts." Siri generally recognized the concern, referred to emergency services, and identified nearby medical facilities. Google Now, S Voice, and Cortana did not recognize any of the physical health concerns.When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to substantially improve.
View details for DOI 10.1001/jamainternmed.2016.0400
View details for Web of Science ID 000375292500014
View details for PubMedID 26974260
View details for PubMedCentralID PMC4996669
Creation and validation of the Cognitive and Behavioral Response to Stress Scale in a depression trial
2015; 230 (3): 819-825
The Cognitive and Behavioral Response to Stress Scale (CB-RSS) is a self-report measure of the use and helpfulness of several cognitive and behavioral skills. Unlike other measures that focus on language specific to terms used in therapy, the CB-RSS was intended to tap the strategies in ways that might be understandable to those who had not undergone therapy. The measure was included in a clinical trial of cognitive-behavioral therapy for depression and completed by 325 participants at baseline and end of treatment (18 weeks). Psychometric properties of the scale were assessed through iterative exploratory and confirmatory factor analyses. These analyses identified two subscales, cognitive and behavioral skills, each with high reliability. Validity was addressed by investigating relationships with depression symptoms, positive affect, perceived stress, and coping self-efficacy. End of treatment scores predicted changes in all outcomes, with the largest relationships between baseline CB-RSS scales and coping self-efficacy. These findings suggest that the CB-RSS is a useful tool to measure cognitive and behavioral skills both at baseline (prior to treatment) as well as during the course of treatment.
View details for DOI 10.1016/j.psychres.2015.10.033
View details for Web of Science ID 000367860900012
View details for PubMedCentralID PMC4681670
How smartphone applications may be implemented in the treatment of eating disorders: case reports and case series data
Advances in Eating Disorders: Theory, Research and Practice
View details for DOI 10.1080/21662630.2014.938089