Bio


Vijoy Abraham is Assistant Director and Head of the Center for Interdisciplinary Digital Research, where he leads a team of social science and humanities scholars in support of digital research projects.

He also analyzes and implements new services for provisioning Stanford Libraries’ data collections to the research community, working with colleagues within the Libraries and across the University’s research and data infrastructure teams.

His background includes research in cognitive psychology, cognitive neuroscience, medical informatics, and computational social science.

Current Role at Stanford


Assistant Director and Head, Center for Interdisciplinary Digital Research
Stanford Libraries

All Publications


  • A perspective on computational research support programs in the library: More than 20 years of data from Stanford University Libraries. Journal of Librarianship and Information Science. Muzzall, E., Abraham, V., Nakao, R. 2022
  • A new tool for computer assisted paleography: the Digital Analysis of Syriac Handwriting project. Journal of Syriac Studies. Penn, M., Abraham, V., Bailey, S., Broadwell, P., Crouser, R., De La Rosa, J., Howe, N., Wiles, S. 2021; 24 (1): 35-52
  • Predicting dire outcomes of patients with community acquired pneumonia. Journal of Biomedical Informatics. Cooper, G. F., Abraham, V., Aliferis, C. F., Aronis, J. M., Buchanan, B. G., Caruana, R., Fine, M. J., Janosky, J. E., Livingston, G., Mitchell, T., Monti, S., Spirtes, P. 2005; 38 (5): 347-366

    Abstract

    Community-acquired pneumonia (CAP) is an important clinical condition with regard to patient mortality, patient morbidity, and healthcare resource utilization. The assessment of the likely clinical course of a CAP patient can significantly influence decision making about whether to treat the patient as an inpatient or as an outpatient. That decision can in turn influence resource utilization, as well as patient well-being. Predicting dire outcomes, such as mortality or severe clinical complications, is a particularly important component in assessing the clinical course of patients. We used a training set of 1601 CAP patient cases to construct 11 statistical and machine-learning models that predict dire outcomes. We evaluated the resulting models on 686 additional CAP patient cases. The primary goal was not to compare these learning algorithms as a study end point; rather, it was to develop the best model possible to predict dire outcomes. A special version of an artificial neural network (NN) model predicted dire outcomes the best. Using the 686 test cases, we estimated the expected healthcare quality and cost impact of applying the NN model in practice. The particular, quantitative results of this analysis are based on a number of assumptions that we make explicit; they will require further study and validation. Nonetheless, the general implication of the analysis seems robust, namely, that even small improvements in predictive performance for prevalent and costly diseases, such as CAP, are likely to result in significant improvements in the quality and efficiency of healthcare delivery. Therefore, seeking models with the highest possible level of predictive performance is important. Consequently, seeking ever better machine-learning and statistical modeling methods is of great practical significance.

    View details for DOI 10.1016/j.jbi.2005.02.005

    View details for Web of Science ID 000232738600002

    View details for PubMedID 16198995
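
    The workflow the abstract describes is a standard supervised-learning pipeline: fit candidate models on the 1601 training cases, then evaluate on the 686 held-out cases. A minimal Python sketch of that pipeline, using scikit-learn's MLPClassifier as a stand-in for the paper's custom neural network and synthetic data in place of the real patient cases:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      # Synthetic stand-ins for the 1601 training and 686 test CAP cases
      # (20 hypothetical clinical features; labels = dire outcome yes/no).
      X_train, y_train = rng.normal(size=(1601, 20)), rng.integers(0, 2, 1601)
      X_test, y_test = rng.normal(size=(686, 20)), rng.integers(0, 2, 686)

      model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
      model.fit(X_train, y_train)

      # Evaluate discrimination on the held-out cases.
      probs = model.predict_proba(X_test)[:, 1]
      print(f"test AUC: {roc_auc_score(y_test, probs):.3f}")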

  • Scoring performance on computer-based patient simulations: beyond value of information. Proceedings of the AMIA Symposium. Downs, S. M., Marasigan, F., Abraham, V., Wildemuth, B., Friedman, C. P. 1999: 520-524

    Abstract

    As computer-based clinical case simulations become increasingly popular for training and evaluating clinicians, approaches are needed to evaluate a trainee's or examinee's solution of the simulated cases. In 1997 we developed a decision analytic approach to scoring performance on computerized patient case simulations, using expected value of information (VOI) to generate a score each time the user requested clinical information from the simulation. Although this measure has many desirable characteristics, we found that the VOI was zero for the majority of information requests. We enhanced our original algorithm to measure potential decrements in expected utility that could result from using results of information requests that have zero VOI. Like the original algorithm, the new approach uses decision models, represented as influence diagrams, to represent the diagnostic problem. The process of solving computer-based patient simulations involves repeated cycles of requesting and receiving clinical data from the simulations. Each time the user requests clinical data from the simulation, the influence diagram is evaluated to determine the expected VOI of the requested clinical datum. The VOI is non-zero only if the requested datum has the potential to change the leading diagnosis. The VOI is zero when the data item requested does not map to any node in the influence diagram or when the item maps to a node but does not change the leading diagnosis regardless of its value. Our new algorithm generates a score for each of these situations by modeling what would happen to the expected utility of the model if the user changes the leading diagnosis based on the results. The resulting algorithm produces a non-zero score for all information requests. The score is the VOI when the VOI is non-zero. It is a negative number when the VOI is zero.

    View details for PubMedID 10566413

    View details for PubMedCentralID PMC2232774
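
    The scoring rule the abstract describes can be summarized in a few lines: the score for an information request is its expected value of information (VOI) when that is non-zero; otherwise it is the (non-positive) change in expected utility that would result from switching the leading diagnosis on the basis of the result. A hypothetical sketch, not the authors' influence-diagram implementation:

      def score_information_request(voi: float,
                                    eu_current: float,
                                    eu_switched: float) -> float:
          """Score one request for clinical data from a case simulation."""
          if voi > 0:
              # The requested datum could change the leading diagnosis.
              return voi
          # Zero-VOI request: penalize by the expected utility that would be
          # lost if the user changed the leading diagnosis on this result.
          return eu_switched - eu_current  # non-positive by construction

      print(score_information_request(voi=0.12, eu_current=0.80, eu_switched=0.75))  # 0.12
      print(score_information_request(voi=0.0, eu_current=0.80, eu_switched=0.75))   # -0.05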

  • Enhancement of clinicians' diagnostic reasoning by computer-based consultation: a multisite study of 2 systems. JAMA: The Journal of the American Medical Association. Friedman, C. P., Elstein, A. S., Wolf, F. M., Murphy, G. C., Franz, T. M., Heckerling, P. S., Fine, P. L., Miller, T. M., Abraham, V. 1999; 282 (19): 1851-1856

    Abstract

    Context: Computer-based diagnostic decision support systems (DSSs) were developed to improve health care quality by providing accurate, useful, and timely diagnostic information to clinicians. However, most studies have emphasized the accuracy of the computer system alone, without placing clinicians in the role of direct users. Objective: To explore the extent to which consultations with DSSs improve clinicians' diagnostic hypotheses in a set of diagnostically challenging cases. Design: Partially randomized controlled trial conducted in a laboratory setting, using a prospective balanced experimental design in 1995-1998. Setting: Three academic medical centers, none of which were involved in the development of the DSSs. Participants: A total of 216 physicians: 72 at each site, including 24 internal medicine faculty members, 24 senior residents, and 24 fourth-year medical students. One physician's data were lost to analysis. Interventions: Two DSSs, ILIAD (version 4.2) and Quick Medical Reference (QMR; version 3.7.1), were used by participants for diagnostic evaluation of a total of 36 cases based on actual patients. After training, each subject evaluated 9 of the 36 cases, first without and then using a DSS, and suggested an ordered list of diagnostic hypotheses after each evaluation. Main Outcome Measures: Diagnostic accuracy, measured as the presence of the correct diagnosis on the hypothesis list and also using a derived diagnostic quality score, before and after consultation with the DSSs. Results: Correct diagnoses appeared in subjects' hypothesis lists for 39.5% of cases prior to consultation and 45.4% of cases after consultation. Subjects' mean diagnostic quality scores increased from 5.7 (95% confidence interval [CI], 5.5-5.9) to 6.1 (95% CI, 5.9-6.3) (effect size: Cohen d = 0.32; 95% CI, 0.23-0.41; P<.001). Larger increases (P = .048) were observed for students than for residents and faculty. Effect size varied significantly (P<.02) by DSS (Cohen d = 0.20; 95% CI, 0.08-0.32 for ILIAD vs Cohen d = 0.45; 95% CI, 0.31-0.59 for QMR). Conclusions: Our study supports the idea that "hands-on" use of diagnostic DSSs can influence diagnostic reasoning of clinicians. The larger effect for students suggests a possible educational role for these systems.

    View details for Web of Science ID 000083615700029

    View details for PubMedID 10573277
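
    The reported effect size can be sanity-checked from the abstract's own numbers: Cohen d is the mean difference divided by a pooled standard deviation, so the pre/post means (5.7 and 6.1) and d = 0.32 together imply a pooled SD of about 1.25 (the SD itself is not reported in the abstract):

      pre_mean, post_mean, cohen_d = 5.7, 6.1, 0.32
      implied_pooled_sd = (post_mean - pre_mean) / cohen_d
      print(f"implied pooled SD: {implied_pooled_sd:.2f}")  # ~1.25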

  • Student and faculty performance in clinical simulations with access to a searchable information resource. Annual Symposium of the American Medical Informatics Association. Abraham, V. A., Friedman, C. P., Wildemuth, B. M., Downs, S. M., Kantrowitz, P. J., Robinson, E. N. BMJ Publishing Group, 1999: 648–652

    Abstract

    In this study we explore how students' use of an easily accessible and searchable database affects their performance in clinical simulations. We do this by comparing the performance of students with and without database access, and comparing both to a sample of faculty members. The literature supports the idea that interactive information resources can augment a clinician's problem solving ability in small clinical vignettes. We have taken the INQUIRER bacteriological database, containing detailed information on 63 medically important bacteria in 33 structured fields, and incorporated it into a computer-based clinical simulation. Subjects worked through the case-based clinical simulations, with some having access to the INQUIRER information resource. Performance metrics were based on correct determination of the etiologic agent in the simulation and were cross-tabulated with student access of the information resource; more specifically, it was determined whether the student displayed the database record describing the etiologic agent. Chi-square tests show statistical significance for this relationship (χ² = 3.922; p = 0.048). Results support the idea that students with database access in a clinical simulation environment can perform at a higher level than their counterparts who lack access to such information, reflecting favorably on the use of information resources in training environments.

    View details for Web of Science ID 000170207300133

    View details for PubMedID 10566439
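
    The cross-tabulation test the abstract reports is a chi-square test of independence on a 2x2 table (database record viewed vs. correct diagnosis). A sketch using SciPy, with hypothetical counts since the abstract reports only the test statistic (χ² = 3.922; p = 0.048):

      import numpy as np
      from scipy.stats import chi2_contingency

      # Hypothetical counts. Rows: displayed the etiologic agent's database
      # record (yes/no); columns: correct etiologic diagnosis (yes/no).
      table = np.array([[18, 7],
                        [10, 15]])
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")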