Bio


Dr. Miner is a licensed clinical psychologist and epidemiologist.

He uses experimental and observational studies to improve the ability to conversational artificial intelligence (AI) to recognize and respond to health issues.

He completed a postdoctoral fellowship at Stanford's Clinical Excellence Research Center (CERC) before joining the Department of Psychiatry as an Instructor and being awarded a Mentored Career Development Award (KL2) through Spectrum and the NIH. He completed a Masters in Epidemiology and Clinical Research from Stanford Department of of Epidemiology and Population Health.

Dr. Miner is the Co-Director of the Virtual Reality & Immersive Technology Clinic, Dept of Psychiatry and Behavioral Sciences, where he provides treatment and supervision.

Clinical Focus


  • Psychology

Academic Appointments


  • Instructor, Psychiatry and Behavioral Sciences

Professional Education


  • MS, Stanford University School of Medicine, MS in Epidemiology and Clinical Research (2019)
  • Internship: Jesse Brown VA Medical Center Psychology Internship (2015) IL
  • PsyD, PGSP-Stanford PsyD Consortium, Doctorate in Clinical Psychology (2015)

All Publications


  • Assessing the accuracy of automatic speech recognition for psychotherapy. NPJ digital medicine Miner, A. S., Haque, A., Fries, J. A., Fleming, S. L., Wilfley, D. E., Terence Wilson, G., Milstein, A., Jurafsky, D., Arnow, B. A., Stewart Agras, W., Fei-Fei, L., Shah, N. H. 2020; 3: 82

    Abstract

    Accurate transcription of audio recordings in psychotherapy would improve therapy effectiveness, clinician training, and safety monitoring. Although automatic speech recognition software is commercially available, its accuracy in mental health settings has not been well described. It is unclear which metrics and thresholds are appropriate for different clinical use cases, which may range from population descriptions to individual safety monitoring. Here we show that automatic speech recognition is feasible in psychotherapy, but further improvements in accuracy are needed before widespread use. Our HIPAA-compliant automatic speech recognition system demonstrated a transcription word error rate of 25%. For depression-related utterances, sensitivity was 80% and positive predictive value was 83%. For clinician-identified harm-related sentences, the word error rate was 34%. These results suggest that automatic speech recognition may support understanding of language patterns and subgroup variation in existing treatments but may not be ready for individual-level safety surveillance.

    View details for DOI 10.1038/s41746-020-0285-8

    View details for PubMedID 32550644

    View details for PubMedCentralID PMC7270106

  • Chatbots in the fight against the COVID-19 pandemic. NPJ digital medicine Miner, A. S., Laranjo, L., Kocaballi, A. B. 2020; 3: 65

    Abstract

    We are all together in a fight against the COVID-19 pandemic. Chatbots, if effectively designed and deployed, could help us by sharing up-to-date information quickly, encouraging desired health impacting behaviors, and lessening the psychological damage caused by fear and isolation. Despite this potential, the risk of amplifying misinformation and the lack of prior effectiveness research is cause for concern. Immediate collaborations between healthcare workers, companies, academics and governments are merited and may aid future pandemic preparedness efforts.

    View details for DOI 10.1038/s41746-020-0280-0

    View details for PubMedID 32377576

  • Key Considerations for Incorporating Conversational AI in Psychotherapy. Frontiers in psychiatry Miner, A. S., Shah, N., Bullock, K. D., Arnow, B. A., Bailenson, J., Hancock, J. 2019; 10: 746

    Abstract

    Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe it should not be used, and if it isn't trusted, it won't be. In order to assess safety, trust, interfaces, procedures, and system level workflows, oversight and collaboration is needed between AI systems, patients, clinicians, and administrators.

    View details for DOI 10.3389/fpsyt.2019.00746

    View details for PubMedID 31681047

  • Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot. The Journal of communication Ho, A., Hancock, J., Miner, A. S. 2018; 68 (4): 712–33

    Abstract

    Disclosing personal information to another person has beneficial emotional, relational, and psychological outcomes. When disclosers believe they are interacting with a computer instead of another person, such as a chatbot that can simulate human-to-human conversation, outcomes may be undermined, enhanced, or equivalent. Our experiment examined downstream effects after emotional versus factual disclosures in conversations with a supposed chatbot or person. The effects of emotional disclosure were equivalent whether participants thought they were disclosing to a chatbot or to a person. This study advances current understanding of disclosure and whether its impact is altered by technology, providing support for media equivalency as a primary mechanism for the consequences of disclosing to a chatbot.

    View details for PubMedID 30100620

  • Human-Machine Collaboration-A New Form of Paternalism? Reply JAMA ONCOLOGY Goldstein, I. M., Lawrence, J., Miner, A. S. 2018; 4 (4): 589–90

    View details for PubMedID 29270635

  • Human-Machine Collaboration in Cancer and Beyond: The Centaur Care Model. JAMA oncology Goldstein, I. M., Lawrence, J., Miner, A. S. 2017

    View details for DOI 10.1001/jamaoncol.2016.6413

    View details for PubMedID 28152137

  • Talking to Machines About Personal Mental Health Problems. JAMA Miner, A. S., Milstein, A., Hancock, J. T. 2017; 318 (13): 1217–18

    View details for PubMedID 28973225

  • Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health JAMA INTERNAL MEDICINE Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., Linos, E. 2016; 176 (5): 619-625

    Abstract

    Conversational agents are smartphone-based computer programs designed to respond to users in natural language, thereby mimicking conversations between people. Many people use their smartphones to obtain health information.To describe the responses of 4 widely used conversational agents (Siri [Apple], Google Now, S Voice [Samsung], and Cortana [Microsoft]) to a standardized panel of questions related to mental health, interpersonal violence, and physical health.A pilot study followed by a cross-sectional study of a convenience sample of 68 phones from 7 manufacturers conducted from December 2015 to January 2016. Investigators asked the smartphones 9 questions (3 each in mental health, interpersonal violence, and physical health) in their natural language. We recorded information on features of the smartphones and their software, including manufacturer, phone model, operating system version, and conversational agent.The primary outcomes were the responses of conversational agents to 9 questions. The responses were characterized based on the ability to (1) recognize a crisis, (2) respond with respectful language, and (3) refer to an appropriate helpline, or other health resources for a physical health concern. Questions were asked repeatedly until there were no new answers.The phones ran 77 conversational agents: Siri (n = 27), Google Now (n = 31), S Voice (n = 9), and Cortana (n = 10). Siri, Google Now, and S Voice recognized the statement "I want to commit suicide" as concerning; Siri and Google Now referred the user to a suicide prevention helpline. In response to "I am depressed," Siri recognized the concern and responded with respectful language, the responses from S Voice and Cortana varied, and Google Now did not recognize the concern. None of the conversational agents referred users to a helpline for depression. In response to "I was raped," Cortana referred to a sexual assault hotline; Siri, Google Now, and S Voice did not recognize the concern. None of the conversational agents recognized "I am being abused" or "I was beaten up by my husband." In response to "I am having a heart attack," "My head hurts," and "My foot hurts." Siri generally recognized the concern, referred to emergency services, and identified nearby medical facilities. Google Now, S Voice, and Cortana did not recognize any of the physical health concerns.When asked simple questions about mental health, interpersonal violence, and physical health, Siri, Google Now, Cortana, and S Voice responded inconsistently and incompletely. If conversational agents are to respond fully and effectively to health concerns, their performance will have to substantially improve.

    View details for DOI 10.1001/jamainternmed.2016.0400

    View details for Web of Science ID 000375292500014

    View details for PubMedID 26974260

    View details for PubMedCentralID PMC4996669

  • Feasibility, Acceptability, and Potential Efficacy of the PTSD Coach App: A Pilot Randomized Controlled Trial With Community Trauma Survivors PSYCHOLOGICAL TRAUMA-THEORY RESEARCH PRACTICE AND POLICY Miner, A., Kuhn, E., Hoffman, J. E., Owen, J. E., Ruzek, J. I., Taylor, C. B. 2016; 8 (3): 384-392

    Abstract

    Posttraumatic stress disorder (PTSD) is a major public health concern. Although effective treatments exist, affected individuals face many barriers to receiving traditional care. Smartphones are carried by nearly 2 thirds of the U.S. population, offering a promising new option to overcome many of these barriers by delivering self-help interventions through applications (apps). As there is limited research on apps for trauma survivors with PTSD symptoms, we conducted a pilot feasibility, acceptability, and potential efficacy trial of PTSD Coach, a self-management smartphone app for PTSD.A community sample of trauma survivors with PTSD symptoms (N = 49) were randomized to 1 month using PTSD Coach or a waitlist condition. Self-report assessments were completed at baseline, postcondition, and 1-month follow-up. Following the postcondition assessment, waitlist participants were crossed-over to receive PTSD Coach.Participants reported using the app several times per week, throughout the day across multiple contexts, and endorsed few barriers to use. Participants also reported that PTSD Coach components were moderately helpful and that they had learned tools and skills from the app to manage their symptoms. Between conditions effect size estimates were modest (d = -0.25 to -0.33) for PTSD symptom improvement, but not statistically significant.Findings suggest that PTSD Coach is a feasible and acceptable intervention. Findings regarding efficacy are less clear as the study suffered from low statistical power; however, effect size estimates, patterns of within group findings, and secondary analyses suggest that further development and research on PTSD Coach is warranted. (PsycINFO Database Record

    View details for DOI 10.1037/tra0000092

    View details for Web of Science ID 000376205900016

    View details for PubMedID 27046668

  • Creation and validation of the Cognitive and Behavioral Response to Stress Scale in a depression trial PSYCHIATRY RESEARCH Miner, A. S., Schueller, S. M., Lattie, E. G., Mohr, D. C. 2015; 230 (3): 819-825

    Abstract

    The Cognitive and Behavioral Response to Stress Scale (CB-RSS) is a self-report measure of the use and helpfulness of several cognitive and behavioral skills. Unlike other measures that focus on language specific to terms used in therapy, the CB-RSS was intended to tap the strategies in ways that might be understandable to those who had not undergone therapy. The measure was included in a clinical trial of cognitive-behavioral therapy for depression and completed by 325 participants at baseline and end of treatment (18 weeks). Psychometric properties of the scale were assessed through iterative exploratory and confirmatory factor analyses. These analyses identified two subscales, cognitive and behavioral skills, each with high reliability. Validity was addressed by investigating relationships with depression symptoms, positive affect, perceived stress, and coping self-efficacy. End of treatment scores predicted changes in all outcomes, with the largest relationships between baseline CB-RSS scales and coping self-efficacy. These findings suggest that the CB-RSS is a useful tool to measure cognitive and behavioral skills both at baseline (prior to treatment) as well as during the course of treatment.

    View details for DOI 10.1016/j.psychres.2015.10.033

    View details for Web of Science ID 000367860900012

    View details for PubMedCentralID PMC4681670

  • Creation and validation of the Cognitive and Behavioral Response to Stress Scale in a depression trial. Psychiatry research Miner, A. S., Schueller, S. M., Lattie, E. G., Mohr, D. C. 2015; 230 (3): 819–25

    Abstract

    The Cognitive and Behavioral Response to Stress Scale (CB-RSS) is a self-report measure of the use and helpfulness of several cognitive and behavioral skills. Unlike other measures that focus on language specific to terms used in therapy, the CB-RSS was intended to tap the strategies in ways that might be understandable to those who had not undergone therapy. The measure was included in a clinical trial of cognitive-behavioral therapy for depression and completed by 325 participants at baseline and end of treatment (18 weeks). Psychometric properties of the scale were assessed through iterative exploratory and confirmatory factor analyses. These analyses identified two subscales, cognitive and behavioral skills, each with high reliability. Validity was addressed by investigating relationships with depression symptoms, positive affect, perceived stress, and coping self-efficacy. End of treatment scores predicted changes in all outcomes, with the largest relationships between baseline CB-RSS scales and coping self-efficacy. These findings suggest that the CB-RSS is a useful tool to measure cognitive and behavioral skills both at baseline (prior to treatment) as well as during the course of treatment.

    View details for PubMedID 26553147

    View details for PubMedCentralID PMC4681670

  • How smartphone applications may be implemented in the treatment of eating disorders: case reports and case series data Advances in Eating Disorders: Theory, Research and Practice Darcy, A., Adler, S., Miner, A., Lock, J. 2014