I am a fourth-year clinical medical student at Stanford University School of Medicine. Here you will find information about my interests, including a list of my publications and projects. I completed my doctoral research on the training and evaluation of robotic surgical techniques with the Biorobotics Lab at the University of Washington in spring 2013. I am a co-founder of C-SATS, Inc., a surgical performance assessment company that uses expert reviews and the wisdom of the crowd to train surgeons and medical practitioners.
Clinical Scholar, Urology
Engaging Housestaff as Informatics Collaborators: Educational and Operational Opportunities.
Applied Clinical Informatics
2021; 12 (5): 1150-1156
BACKGROUND: In academic hospitals, housestaff (interns, residents, and fellows) are a core user group of clinical information technology (IT) systems, yet are often relegated to being recipients of change, rather than active partners in system improvement. These information systems are an integral part of health care delivery, and formal efforts to involve and educate housestaff are nascent. OBJECTIVE: This article develops a sustainable forum for effective engagement of housestaff in hospital informatics initiatives and creates opportunities for professional development. METHODS: A housestaff-led IT council was created within an academic medical center and integrated with informatics and graduate medical education leadership. The Council was designed to provide a venue for hands-on clinical informatics educational experiences to housestaff across all specialties. RESULTS: In the first year, five housestaff co-chairs and 50 members were recruited. More than 15 projects were completed, with substantial improvements made to clinical systems impacting more than 1,300 housestaff and with touchpoints to nearly 3,000 staff members. Council leadership was integrally involved in hospital governance committees and became the go-to source for housestaff input on informatics efforts. Positive experiences informed members' career development toward informatics roles. Key lessons learned in building for success are discussed. CONCLUSION: The council model has effectively engaged housestaff as learners, local champions, and key informatics collaborators, with positive impact for the participating members and the institution. Requiring few resources for implementation, the model should be replicable at other institutions.
View details for DOI 10.1055/s-0041-1740258
View details for PubMedID 34879406
Removing Race from eGFR calculations: Implications for Urologic Care.
Equations estimating the glomerular filtration rate are important clinical tools in detecting and managing kidney disease. Urologists use these equations extensively in clinical decision making. For example, the estimated glomerular filtration rate is used when considering the type of urinary diversion following cystectomy, selecting systemic chemotherapy in managing urologic cancers, and deciding the type of cross-sectional imaging in diagnosing or staging urologic conditions. However, these equations, while widely accepted, are imprecise and adjust for race, which is a social, not a biologic, construct. The recent killings of unarmed Black Americans in the US have amplified the discussion of racism in healthcare and have prompted institutions to reconsider the role of race in eGFR equations and race-based medicine. Urologists should be aware of the consequences of removing race from these equations, potential alternatives, and how these changes may affect Black patients receiving urologic care.
View details for DOI 10.1016/j.urology.2021.03.018
View details for PubMedID 33798557
Crowdsourcing to Assess Surgical Skill
2015; 150 (11): 1086–87
View details for PubMedID 26421369
Crowd-Sourced Assessment of Technical Skills: Differentiating Animate Surgical Skill Through the Wisdom of Crowds
JOURNAL OF ENDOUROLOGY
2015; 29 (10): 1183-1188
Objective quantification of surgical skill is imperative as we enter a healthcare environment of quality improvement and performance-based reimbursement. The gold standard tools are infrequently used due to time-intensiveness, cost inefficiency, and lack of standard practices. We hypothesized that valid performance scores of surgical skill can be obtained through crowdsourcing. Twelve surgeons of varying robotic surgical experience performed live porcine robot-assisted urinary bladder closures. Blinded video-recorded performances were scored by expert surgeon graders and by Amazon's Mechanical Turk crowdsourcing crowd workers using the Global Evaluative Assessment of Robotic Skills tool assessing five technical skills domains. Seven expert graders and 50 unique Mechanical Turkers (each paid $0.75/survey) evaluated each video. Global assessment scores were analyzed for correlation and agreement. Six hundred Mechanical Turkers completed the surveys in less than 5 hours, while the seven surgeon graders took 14 days. The duration of video clips ranged from 2 to 11 minutes. The correlation coefficient between the Turkers' and expert graders' scores was 0.95, and Cronbach's alpha was 0.93. Inter-rater reliability among the surgeon graders was 0.89. Crowdsourcing surgical skills assessment yielded rapid, inexpensive agreement with global performance scores given by expert surgeon graders. The crowdsourcing method may provide surgical educators and medical institutions with a boundless number of procedural skills assessors to efficiently quantify technical skills for use in trainee advancement and hospital quality improvement.
View details for DOI 10.1089/end.2015.0104
View details for PubMedID 25867006
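Several of the crowdsourcing studies listed here summarize rater agreement with Cronbach's alpha over a matrix of performance scores. As a purely illustrative sketch (not the studies' actual analysis code, and using invented scores rather than real Mechanical Turk data), the statistic can be computed from a subjects-by-raters matrix like so:

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for a list of subject rows, each a list of rater scores."""
    k = len(ratings[0])                                  # number of raters
    columns = list(zip(*ratings))                        # one tuple per rater
    item_var = sum(variance(col) for col in columns)     # sum of per-rater variances
    total_var = variance([sum(row) for row in ratings])  # variance of summed scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical example: 4 recorded performances, each scored by 3 raters
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
]
alpha = cronbach_alpha(scores)
```

Values near 1 indicate that raters' scores move together across performances, which is the sense in which the 0.83-0.95 figures reported in these abstracts reflect strong crowd-expert agreement.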
Crowd-sourced assessment of surgical skills in cricothyrotomy procedure
JOURNAL OF SURGICAL RESEARCH
2015; 196 (2): 302-306
Objective assessment of surgical skills is resource intensive and requires valuable time of expert surgeons. The goal of this study was to assess the ability of a large group of laypersons using a crowd-sourcing tool to grade a surgical procedure (cricothyrotomy) performed on a simulator. The grading included an assessment of the entire procedure by completing an objective assessment of technical skills survey. Two groups of graders were recruited as follows: (1) Amazon Mechanical Turk users and (2) three expert surgeons from the University of Washington Department of Otolaryngology. Graders were presented with a video of participants performing the procedure on the simulator and were asked to grade the video using the objective assessment of technical skills questions. Mechanical Turk users were paid $0.50 for each completed survey. It took 10 h to obtain all responses from 30 Mechanical Turk users for 26 training participants (26 videos/tasks), whereas it took 60 d for three expert surgeons to complete the same 26 tasks. The assessment of surgical performance by a group (n = 30) of laypersons matched the assessment by a group (n = 3) of expert surgeons, with a good level of agreement determined by a Cronbach alpha coefficient of 0.83. We found crowdsourcing to be an efficient, accurate, and inexpensive method for skills assessment, with a good level of agreement with experts' grading.
View details for DOI 10.1016/j.jss.2015.03.018
View details for Web of Science ID 000355103700014
View details for PubMedID 25888499
Crowd-Sourced Assessment of Technical Skills: An Adjunct to Urology Resident Surgical Simulation Training
JOURNAL OF ENDOUROLOGY
2015; 29 (5): 604-609
Crowdsourcing is the practice of obtaining services from a large group of people, typically an online community. Validated methods of evaluating surgical video are time-intensive, expensive, and involve participation of multiple expert surgeons. We sought to obtain valid performance scores of urologic trainees and faculty on a dry-laboratory robotic surgery task module by using crowdsourcing through a web-based grading tool called Crowd-Sourced Assessment of Technical Skills (CSATS). IRB approval was granted to test the technical skills grading accuracy of Amazon.com Mechanical Turk™ crowd-workers compared with three expert faculty surgeon graders. The two groups assessed dry-laboratory robotic surgical suturing performances of three urology residents (PGY-2, -4, -5) and two faculty using three performance domains from the validated Global Evaluative Assessment of Robotic Skills assessment tool. After an average of 2 hours 50 minutes, each of the five videos received 50 crowd-worker assessments. The inter-rater reliability (IRR) between the surgeons and the crowd was 0.91 using Cronbach's alpha statistic (confidence interval = 0.20-0.92), indicating an "excellent" level of agreement between the two groups. The crowds were able to discriminate surgical level, and both the crowds and the expert faculty surgeon graders scored one senior trainee's performance above a faculty member's performance. Surgery-naive crowd-workers can rapidly and accurately assess varying levels of surgical skill relative to a panel of faculty raters. The crowds provided rapid feedback and were inexpensive. CSATS may be a valuable adjunct to surgical simulation training as requirements for more granular and iterative performance tracking of trainees become mandated and commonplace.
View details for DOI 10.1089/end.2014.0616
View details for Web of Science ID 000354037000020
View details for PubMedID 25356517
Crowd-Sourced Assessment of Technical Skill: A Valid Method for Discriminating Basic Robotic Surgery Skills.
Journal of Endourology
2015; 29 (11): 1295–1301
A surgeon's skill in the operating room has been shown to correlate with a patient's clinical outcome. The prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing the power of large, readily available crowds through the Internet, rapid, accurate, and low-cost assessments may be achieved. We hypothesized that assessments provided by crowd workers correlate highly with expert surgeons' assessments. A group of 49 surgeons from two hospitals performed two dry-laboratory robotic surgical skill assessment tasks. The performance of these tasks was video recorded and posted online for evaluation using Amazon Mechanical Turk. The surgical tasks in each video were graded by varying crowd workers (n=30) and experts (n=3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using Cronbach's alpha statistic. GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the performance assessments by experts, with a Cronbach's alpha of 0.84 and 0.92 for the two tasks, respectively. The assessment of surgical skill by crowd workers resulted in a high degree of agreement with the scores provided by expert surgeons in the evaluation of basic robotic surgical dry-laboratory tasks. Crowd responses cost less and were much faster to acquire. This study provides evidence that crowds may provide an adjunctive method for rapidly providing feedback of skills to training and practicing surgeons.
View details for PubMedID 26057232
- Quantifying surgical skill: using the wisdom of crowds ELSEVIER SCIENCE INC. 2014: E158–E159
- Using crowd-assessment to support surgical training in the developing world ELSEVIER SCIENCE INC. 2014: E40
- Preliminary Articulable Probe Designs With RAVEN and Challenges: Image-Guided Robotic Surgery Multitool System JOURNAL OF MEDICAL DEVICES-TRANSACTIONS OF THE ASME 2014; 8 (1)
Raven surgical robot training in preparation for da Vinci.
Studies in Health Technology and Informatics
2014; 196: 135-141
The rapid adoption of robot-assisted surgery challenges the pace at which adequate robotic training can occur, due to limited access to the da Vinci robot. Thirty medical students completed a randomized controlled trial evaluating whether the Raven robot could be used as an alternative training tool for the Fundamentals of Laparoscopic Surgery (FLS) block transfer task on the da Vinci robot. Two groups, one trained on the da Vinci and one trained on the Raven, were tested on a criterion FLS block transfer task on the da Vinci. After robotic FLS block transfer proficiency training, there was no statistically significant difference in path length (p=0.39) or economy of motion scores (p=0.06) between the two groups, but those trained on the da Vinci did have faster task times (p=0.01). These results provide evidence for the value of using the Raven robot for training prior to using the da Vinci surgical system for similar tasks.
View details for PubMedID 24732494
- SurgTrak - A Universal Platform for Quantitative Surgical Data Capture JOURNAL OF MEDICAL DEVICES-TRANSACTIONS OF THE ASME 2013; 7 (3)
Virtual Reality Robotic Surgery Warm-Up Improves Task Performance in a Dry Laboratory Environment: A Prospective Randomized Controlled Study
JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS
2013; 216 (6): 1181-1192
Preoperative simulation warm-up has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. In a 2-center randomized trial, 51 residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot (Intuitive Surgical Inc). Once they successfully achieved performance benchmarks, surgeons were randomized to either receive a 3- to 5-minute VR simulator warm-up or read a leisure book for 10 minutes before performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, and technical and cognitive errors. Task time (-29.29 seconds; p = 0.001; 95% CI, -47.03 to -11.56), path length (-79.87 mm; p = 0.014; 95% CI, -144.48 to -15.25), and cognitive errors were reduced in the warm-up group compared with the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32; p = 0.020; 95% CI, 0.06-0.59) were reduced after the dissimilar VR task. When surgeons were stratified by earlier robotic and laparoscopic clinical experience, the more experienced surgeons (n = 17) demonstrated significant improvements from warm-up in task time (-53.5 seconds; p = 0.001; 95% CI, -83.9 to -23.0) and economy of motion (0.63 mm/s; p = 0.007; 95% CI, 0.18-1.09); improvement in these metrics was not statistically significant in the less-experienced cohort (n = 34). We observed significant performance improvement and error reduction among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up.
View details for DOI 10.1016/j.jamcollsurg.2013.02.012
View details for Web of Science ID 000319039900020
View details for PubMedID 23583618
View details for PubMedCentralID PMC4082669
Raven-II: An Open Platform for Surgical Robotics Research
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
2013; 60 (4): 954-959
The Raven-II is a platform for collaborative research on advances in surgical robotics. Seven universities have begun research using this platform. The Raven-II system has two 3-DOF spherical positioning mechanisms capable of attaching interchangeable four-DOF instruments. The Raven-II software is based on open standards such as Linux and ROS to maximally facilitate software development. The mechanism is robust enough for repeated experiments and animal surgery experiments, but is not engineered to sufficient safety standards for human use. Mechanisms in place for interaction among the user community and dissemination of results include an electronic forum, an online SVN software repository, and meetings and workshops at major robotics conferences.
View details for DOI 10.1109/TBME.2012.2228858
View details for Web of Science ID 000316812200011
View details for PubMedID 23204264
Content and Construct Validation of a Robotic Surgery Curriculum Using an Electromagnetic Instrument Tracker
JOURNAL OF UROLOGY
2012; 188 (3): 919-923
Rapid adoption of robot-assisted surgery has outpaced our ability to train novice roboticists. Objective metrics are required to adequately assess robotic surgical skills, and yet surrogates for proficiency, such as economy of motion and tool path metrics, are not readily accessible directly from the da Vinci® robot system. The trakSTAR™ Tool Tip Tracker is a widely available, cost-effective electromagnetic position sensing mechanism by which objective proficiency metrics can be quantified. We validated a robotic surgery curriculum using the trakSTAR device to objectively capture robotic task proficiency metrics. Through an institutional review board-approved study, 10 subjects were recruited from 2 surgical experience groups (novice and experienced). All subjects completed 3 technical skills modules, including block transfer, intracorporeal suturing/knot tying (fundamentals of laparoscopic surgery), and ring tower transfer, using the da Vinci robot with the trakSTAR device affixed to the robotic instruments. Recorded objective metrics included task time and path length, which were used to calculate economy of motion. Student t test statistics were performed using STATA®. The novice and experienced groups consisted of 5 subjects each. The experienced group outperformed the novice group in all 3 tasks. Experienced surgeons described the simulator platform as useful for training and agreed with incorporating it into a residency curriculum. Robotic surgery curricula can be validated by an off-the-shelf instrument tracking system. This platform allows surgical educators to objectively assess trainees and may provide credentialing offices with a means of objectively assessing any surgical staff member seeking robotic surgery privileges at an institution.
View details for DOI 10.1016/j.juro.2012.05.005
View details for Web of Science ID 000307551200091
View details for PubMedID 22819403
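The path length and economy-of-motion metrics used in the curriculum above are derived from sampled tool-tip positions. A minimal sketch of the underlying arithmetic, using an invented trajectory and task duration rather than real trakSTAR output, and one common distance-per-time definition of economy of motion (the exact definitions used in the studies may differ):

```python
import math

# Hypothetical tool-tip positions (x, y, z) in millimetres, sampled over a task
trajectory = [(0, 0, 0), (3, 4, 0), (3, 4, 12), (6, 8, 12)]
task_time_s = 11.0  # invented task duration in seconds

def path_length(points):
    """Total Euclidean distance travelled between consecutive position samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

length_mm = path_length(trajectory)          # 5 + 12 + 5 = 22 mm
economy_of_motion = length_mm / task_time_s  # 2.0 mm/s
```

With a fixed task, a shorter path and a steadier distance-per-time ratio are the kind of surrogate proficiency signals these curricula score.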