Kathryn McDonald is the executive director of CHP/PCOR and a senior scholar at the centers. She is also associate director of the Stanford-UCSF Evidence-based Practice Center (under RAND). Her work focuses on measures and interventions to achieve evidence-based patient-centered healthcare quality and patient safety.
McDonald has served as a project director and principal investigator on a number of research projects at the Stanford School of Medicine, including the development and ongoing enhancement of the Quality and Patient Safety Indicators for the Agency for Healthcare Research and Quality. She has authored numerous peer-reviewed articles and government reports, including several with a wide enough following to merit recent updates: Care Coordination Measures Atlas, Closing the Quality Gap, and Patient Safety Practices. She served on the Institute of Medicine committee that produced Measuring What Matters: Pediatric and Adolescent Health and Health Care and is currently a member of the IOM Committee on Diagnostic Error in Health Care.
Previously, she worked as a manager for technology optimization and business development at Stanford Hospital, and as a research and development manager for new product development at a medical device company. She received a PhD in Health Policy, with a specialization in organizations and management, from UC Berkeley; a master of management degree (MBA and MHA equivalent), with an emphasis on the healthcare industry, from Northwestern University's Kellogg School of Management; and a BS in chemical engineering from Stanford University.
Executive Director, Center for Health Policy (Freeman Spogli Institute for International Studies) (2003 - Present)
Executive Director, Center for Primary Care and Outcomes Research (Stanford School of Medicine) (1998 - Present)
Co-Director, Health Services and Policy Scholarly Concentration (Stanford School of Medicine) (2003 - Present)
Past President, Society for Medical Decision Making (2010 - 2011)
President, Society for Medical Decision Making (2009 - 2010)
President-Elect, Society for Medical Decision Making (2008 - 2009)
Board of Trustees, Society for Medical Decision Making (2000 - 2005)
Honors & Awards
Arthur Andersen Award, Northwestern University (1990)
Austin Scholar, Northwestern University (1990)
Outstanding Student in Health Services Management Award, Northwestern University (1992)
Most Outstanding Abstracts Award, AcademyHealth Annual Research Meeting (2006)
Teaching Award, Stanford Department of Medicine (2006)
Saenger Distinguished Service Award, Society for Medical Decision Making (2007)
Boards, Advisory Committees, Professional Organizations
Board Chair, Relational Coordination Research Collaboration (2016 - Present)
Co-Chair, Society to Improve Diagnosis in Medicine Patient Engagement Committee (2012 - Present)
Associate Editor, Diagnosis (2014 - Present)
PhD, University of California, Berkeley, School of Public Health, Health Policy, Organization and Management (2017)
MM/MBA, Northwestern University, Kellogg School of Management, Management, Health Services (1992)
BS, Stanford University, Chemical Engineering (1984)
Community and International Work
Health and Governance
Opportunities for Student Involvement
Technological Changes in Healthcare (TECH), Stanford Coordinating Center
Determinants and effects of medical technology use and diffusion internationally
Multiple research institutions
Opportunities for Student Involvement
Kathryn McDonald, Laura Winfrey, Michael Gorin, James Hill, Po Hsu. "Capillary flow device and method for measuring activated partial thromboplastin time." United States Patent 5,039,617, assigned to Biotrack, Inc., Aug 13, 1991
Current Research and Scholarly Interests
My research and educational activities focus on evidence-based medicine, patient-centered concerns, health care quality, and patient safety. I enjoy forming and participating in multidisciplinary teams to identify, highlight, and formulate solutions to issues of suboptimal performance of the health care system, both in the US and abroad.
Integrating Adjuvant Analgesics into Perioperative Pain Practice: Results from an Academic Medical Center.
Pain medicine (Malden, Mass.)
BACKGROUND: Opioid-sparing postoperative pain management therapies are important considering the opioid epidemic. Total knee arthroplasty (TKA) is a common and painful procedure accounting for a large number of opioid prescriptions. Adjuvant analgesics, nonopioid drugs with primary indications other than pain, have shown beneficial pain management and opioid-sparing effects following TKA in clinical trials. We evaluated the adjuvant analgesic gabapentin for its usage patterns and its effects on opioid use, pain, and readmissions. METHODS: This retrospective, observational study included 4,046 patients who received primary TKA between 2009 and 2017, using electronic health records from an academic tertiary care medical institute. Descriptive statistics and multivariate modeling were used to estimate associations between inpatient gabapentin use and adverse pain outcomes as well as inpatient oral morphine equivalents per day (OME). RESULTS: Overall, there was an 8.72% annual increase in gabapentin use (P<0.001). Modeled estimates suggest that gabapentin is associated with a significant decrease in opioid consumption (estimate = 0.63, 95% confidence interval = 0.49-0.82, P<0.001) when controlling for patient characteristics. Patients receiving gabapentin had similar discharge pain scores, follow-up pain scores, and 30-day unplanned readmission rates compared with patients receiving no adjuvant analgesics (P>0.05). CONCLUSIONS: When assessed in a real-world setting over a large cohort of TKA patients, gabapentin is an effective pain management therapy that is associated with reduced opioid consumption, a national priority in this time of opioid crisis, while maintaining the same quality of pain management.
View details for PubMedID 30933284
A Qualitative Analysis of Outpatient Medication Use in Community Settings: Observed Safety Vulnerabilities and Recommendations for Improved Patient Safety.
Journal of patient safety
OBJECTIVE: The aim of the study was to analyze diverse patients' experiences throughout the medication use process to inform the development of overarching interventions that support safe medication use in community settings. METHODS: Using a qualitative observational approach, we conducted approximately 18 hours of direct observation of the medication use process across multiple settings for a sample of vulnerable, high-risk patients. Observers recorded detailed field notes during the observations. To enrich the observational findings, we also conducted six semistructured interviews with medication safety experts representing a diversity of perspectives. Barriers and facilitators to safe medication use were identified based on inductive coding of the data. RESULTS: A variety of safety vulnerabilities plague all stages of the medication use process, and many of the well-established, evidence-based interventions aimed at improving the safety of medication use at key stages of the process have not been widely implemented in the community settings observed in this study. Key safety vulnerabilities identified include limited English proficiency, low health literacy, lack of clinician continuity, incomplete medication reconciliation and counseling, unsafe medication storage and disposal habits, and conflicting healthcare agendas with caregivers. CONCLUSIONS: Our findings underscore a need for overarching, comprehensive interventions that span the entire process of medication use, including integrated communication systems between clinicians, pharmacies, and patients, and a "patient navigator" program that assists patients in navigating the entire medication-taking process. Collective ownership of the medication management system and mutual motivation for devising collaborative solutions are needed among key sectors.
View details for PubMedID 30882615
Utilization of Prostate Cancer Quality Metrics for Research and Quality Improvement: A Structured Review
JOINT COMMISSION JOURNAL ON QUALITY AND PATIENT SAFETY
2019; 45 (3): 217–26
The shift toward value-based care in the United States emphasizes the role of quality measures in payment models. Many diseases, such as prostate cancer, have a proliferation of quality measures, resulting in resource burden and physician burnout. This study aimed to identify and summarize proposed prostate cancer quality measures and describe their frequency and use in peer-reviewed literature. The PubMed database was used to identify quality measures relevant to prostate cancer care, including articles in English through April 2018. A gray literature search for other documents was also conducted. After the selection process of the pertinent articles, measure characteristics were abstracted, and uses were summarized for the 10 most frequently utilized measures in the literature. A total of 26 articles were identified for review. Of the 71 proposed prostate cancer quality measures, only 47 were used, and less than 10% of these were endorsed by the National Quality Forum. Process measures were most frequently reported (84.5%). Only 6 outcome measures (8.5%) were proposed, none of which were among the most frequently utilized. Although a high number of proposed prostate cancer quality measures are reported in the literature, few were assessed, and the majority of these were non-endorsed process measures. Process measures were most commonly assessed; outcome measures were rarely evaluated. In a step to close the quality chasm, a "top 5" core set of quality measures for prostate cancer care, including structure, process, and outcome measures, is suggested. Future studies should consider this comprehensive set of quality measures.
View details for DOI 10.1016/j.jcjq.2018.06.004
View details for Web of Science ID 000461797400013
View details for PubMedID 30236510
Organizational Influences on Time Pressure Stressors and Potential Patient Consequences in Primary Care.
BACKGROUND: Primary care teams face daily time pressures both during patient encounters and outside of appointments. OBJECTIVES: We theorize 2 types of time pressure and test hypotheses about organizational determinants and patient consequences of time pressure. RESEARCH DESIGN: Cross-sectional, observational analysis of data from concurrent surveys of care team members and their patients. SUBJECTS: Patients (n=1291 respondents, 73.5% response rate) with diabetes and/or coronary artery disease established with practice teams (n=353 respondents, 84% response rate) at 16 primary care sites, randomly selected from 2 Accountable Care Organizations. MEASURES AND ANALYSIS: We measured team member perceptions of 2 potentially distinct time pressure constructs: (1) encounter-level, from 7 questions about the likelihood that time pressure results in missing patient management opportunities, and (2) practice-level, using a practice atmosphere rating from calm to chaotic. The Patient Assessment of Chronic Illness Care (PACIC-11) instrument measured patient-reported experience. Multivariate logistic regression models examined organizational predictors of each time pressure type, and hierarchical models examined time pressure predictors of patient-reported experiences. RESULTS: Encounter-level and practice-level time pressure measures were not correlated, nor predicted by the same organizational variables, supporting the hypothesis of two distinct time pressure constructs. More encounter-level time pressure was most strongly associated with less health information technology capability (odds ratio, 0.33; P<0.01). Greater practice-level time pressure (chaos) was associated with lower PACIC-11 scores (odds ratio, 0.74; P<0.01). CONCLUSIONS: Different organizational factors are associated with each form of time pressure. Potential consequences for patients are missed opportunities in patient care and inadequate chronic care support.
View details for PubMedID 30130270
Utilization and effectiveness of multimodal discharge analgesia for postoperative pain management.
The Journal of surgical research
2018; 228: 160–69
BACKGROUND: Although evidence-based guidelines recommend a multimodal approach to pain management, limited information exists on adherence to these guidelines and its association with outcomes in a generalized population. We sought to assess the association between discharge multimodal analgesia and postoperative pain outcomes in two diverse health care settings. METHODS: We evaluated patients undergoing four common surgeries associated with high pain, using electronic health records from an academic hospital (AH) and the Veterans Health Administration (VHA). Multimodal analgesia at discharge was characterized as opioids in combination with acetaminophen (O+A) and nonsteroidal antiinflammatory (O+A+N) drugs. Hierarchical models estimated associations of analgesia with 45-d follow-up pain scores and 30-d readmissions. RESULTS: We identified 7893 patients at AH and 34,581 at VHA. In both settings, most patients were discharged with O+A (60.6% and 54.8%, respectively), yet a significant proportion received opioids alone (AH: 24.3% and VHA: 18.8%). Combining acetaminophen with opioids was associated with decreased follow-up pain in VHA (odds ratio [OR]: 0.86, 95% confidence interval [CI]: 0.79, 0.93) and decreased readmissions (AH OR: 0.74, CI: 0.60, 0.90; VHA OR: 0.89, CI: 0.82, 0.96). Further addition of nonsteroidal antiinflammatory drugs was associated with further decreased follow-up pain (AH OR: 0.71, CI: 0.53, 0.96; VHA OR: 0.77, CI: 0.69, 0.86) and readmissions (AH OR: 0.46, CI: 0.31, 0.69; VHA OR: 0.84, CI: 0.76, 0.93). In both systems, patients receiving multimodal analgesia received 10%-40% fewer opioids per day compared to patients receiving opioids only. CONCLUSIONS: A majority of surgical patients receive a multimodal pain approach at discharge, yet many receive only opioids. A multimodal regimen at discharge was associated with better follow-up pain and all-cause readmissions compared to the opioid-only regimen.
View details for PubMedID 29907207
Pragmatic Insights on Patient Safety Priorities and Intervention Strategies in Ambulatory Settings
JOINT COMMISSION JOURNAL ON QUALITY AND PATIENT SAFETY
2017; 43 (12): 661–70
Effect of Medicare's Nonpayment Policy on Surgical Site Infections Following Orthopedic Procedures.
Infection control and hospital epidemiology
OBJECTIVE Orthopedic procedures are an important focus in efforts to reduce surgical site infections (SSIs). In 2008, the Centers for Medicare and Medicaid Services (CMS) stopped reimbursements for additional charges associated with serious hospital-acquired conditions, including SSI following certain orthopedic procedures. We aimed to evaluate the CMS policy's effect on rates of targeted orthopedic SSIs among the Medicare population. DESIGN We examined SSI rates following orthopedic procedures among the Medicare population before and after policy implementation compared to a similarly aged control group. Using the Nationwide Inpatient Sample database for 2000-2013, we estimated rate ratios (RRs) of orthopedic SSIs among Medicare and non-Medicare patients using a difference-in-differences approach. RESULTS Following policy implementation, SSIs significantly decreased among both the Medicare and non-Medicare populations (RR, 0.7; 95% confidence interval [CI], 0.6-0.8 and RR, 0.81; 95% CI, 0.7-0.9, respectively). However, the estimated decrease among the Medicare population was not significantly greater than the decrease among the control population (RR, 0.9; 95% CI, 0.8-1.1). CONCLUSIONS While SSI rates decreased significantly following the implementation of the CMS nonpayment policy, this trend was not associated with the policy intervention but rather with larger secular trends that likely contributed to decreasing SSI rates over time.
View details for DOI 10.1017/ice.2017.86
View details for PubMedID 28487001
Development and Validation of the Agency for Healthcare Research and Quality Measures of Potentially Preventable Emergency Department (ED) Visits: The ED Prevention Quality Indicators for General Health Conditions.
Health services research
To develop and validate rates of potentially preventable emergency department (ED) visits as indicators of community health. Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project 2008-2010 State Inpatient Databases and State Emergency Department Databases. Empirical analyses and structured panel reviews. Panels of 14-17 clinicians and end users evaluated a set of ED Prevention Quality Indicators (PQIs) using a Modified Delphi process. Empirical analyses included assessing variation in ED PQI rates across counties and sensitivity of those rates to county-level poverty, uninsurance, and density of primary care physicians (PCPs). ED PQI rates varied widely across U.S. communities. Indicator rates were significantly associated with county-level poverty, median income, Medicaid insurance, and levels of uninsurance. A few indicators were significantly associated with PCP density, with higher rates in areas with greater density. A clinical and an end-user panel separately rated the indicators as having strong face validity for most uses evaluated. The ED PQIs have undergone initial validation as indicators of community health, with potential for use in public reporting, population health improvement, and research.
View details for DOI 10.1111/1475-6773.12687
View details for PubMedID 28369814
Drug-Free Interventions to Reduce Pain or Opioid Consumption After Total Knee Arthroplasty: A Systematic Review and Meta-analysis.
There is increased interest in nonpharmacological treatments to reduce pain after total knee arthroplasty. Yet, little consensus supports the effectiveness of these interventions. To systematically review and meta-analyze evidence of nonpharmacological interventions for postoperative pain management after total knee arthroplasty. Database searches of MEDLINE (PubMed), EMBASE (OVID), Cochrane Central Register of Controlled Trials (CENTRAL), Cochrane Database of Systematic Reviews, Web of Science (ISI database), Physiotherapy Evidence Database (PEDro), and ClinicalTrials.gov for the period between January 1946 and April 2016. Randomized clinical trials comparing nonpharmacological interventions with other interventions in combination with standard care were included. Two reviewers independently extracted the data from selected articles using a standardized form and assessed the risk of bias. A random-effects model was used for the analyses. Postoperative pain and consumption of opioids and analgesics. Of 5509 studies, 39 randomized clinical trials were included in the meta-analysis (2391 patients). The most commonly performed interventions included continuous passive motion, preoperative exercise, cryotherapy, electrotherapy, and acupuncture. Moderate-certainty evidence showed that electrotherapy reduced the use of opioids (mean difference, -3.50; 95% CI, -5.90 to -1.10 morphine equivalents in milligrams per kilogram per 48 hours; P = .004; I2 = 17%) and that acupuncture delayed opioid use (mean difference, 46.17; 95% CI, 20.84 to 71.50 minutes to the first patient-controlled analgesia; P < .001; I2 = 19%). There was low-certainty evidence that acupuncture improved pain (mean difference, -1.14; 95% CI, -1.90 to -0.38 on a visual analog scale at 2 days; P = .003; I2 = 0%).
Very low-certainty evidence showed that cryotherapy was associated with a reduction in opioid consumption (mean difference, -0.13; 95% CI, -0.26 to -0.01 morphine equivalents in milligrams per kilogram per 48 hours; P = .03; I2 = 86%) and with pain improvement (mean difference, -0.51; 95% CI, -1.00 to -0.02 on the visual analog scale; P < .05; I2 = 62%). Low-certainty or very low-certainty evidence showed that continuous passive motion and preoperative exercise produced no pain improvement or reduction in opioid consumption: for continuous passive motion, the mean differences were -0.05 (95% CI, -0.35 to 0.25) on the visual analog scale (P = .74; I2 = 52%) and 6.58 (95% CI, -6.33 to 19.49) for opioid consumption at 1 and 2 weeks (P = .32; I2 = 87%), and for preoperative exercise, the mean difference was -0.14 (95% CI, -1.11 to 0.84) on the Western Ontario and McMaster Universities Arthritis Index Scale (P = .78; I2 = 65%). In this meta-analysis, electrotherapy and acupuncture after total knee arthroplasty were associated with reduced and delayed opioid consumption.
View details for PubMedID 28813550
Opioid Abuse And Poisoning: Trends In Inpatient And Emergency Department Discharges.
Health affairs (Project Hope)
2017; 36 (10): 1748–53
Addressing the opioid epidemic is a national priority. We analyzed national trends in inpatient and emergency department (ED) discharges for opioid abuse, dependence, and poisoning using Healthcare Cost and Utilization Project data. Inpatient and ED discharge rates increased overall across the study period, but a decline was observed for prescription opioid-related discharges beginning in 2010, while a sharp increase in heroin-related discharges began in 2008.
View details for PubMedID 28971919
Evaluating patient safety indicators in orthopedic surgery between Italy and the USA
INTERNATIONAL JOURNAL FOR QUALITY IN HEALTH CARE
2016; 28 (4): 486-491
To compare patient safety in major orthopedic procedures between an orthopedic hospital in Italy and 26 US hospitals of similar size. Retrospective analysis of administrative data from hospital discharge records in Italy and Florida, USA, 2011-13. Patient Safety Indicators (PSIs) developed by the Agency for Healthcare Research and Quality were used to identify inpatient adverse events (AEs). We examined the factors associated with the development of each different PSI, taking into account known confounders, using logistic regression. One Italian orthopedic hospital and 26 hospitals in Florida with ≥1000 major orthopedic procedures per year. Patients ≥18 years who underwent 1 of 17 major orthopedic procedures, with a length of stay (LOS) >1 day. Patient safety management between Italy and the USA. Patient Safety Indicators. A total of 14,393 patients in Italy (mean age = 59.8 years) and 131,371 in the USA (mean age = 65.4 years) were included. US patients had lower adjusted odds of developing a PSI compared to Italy for pressure ulcers (odds ratio [OR]: 0.21; 95% confidence interval [CI]: 0.10-0.45), hemorrhage or hematoma (OR: 0.42; CI: 0.23-0.78), and physiologic and metabolic derangement (OR: 0.08; CI: 0.02-0.37). Italian patients had lower odds of pulmonary embolism/deep vein thrombosis (OR: 3.17; CI: 2.16-4.67) compared to US patients. Important differences in patient safety events were identified across countries using US-developed PSIs. Though caution about potential coding differences is wise when comparing PSIs internationally, other differences may explain AEs and offer opportunities for cross-country learning about safe practices.
View details for DOI 10.1093/intqhc/mzw053
View details for Web of Science ID 000384660300008
View details for PubMedID 27272404
Risks of adverse events in colorectal patients: population-based study
JOURNAL OF SURGICAL RESEARCH
2016; 202 (2): 328-334
Postoperative (PO) outcomes are rapidly being integrated into value-based purchasing programs, and associated penalties are slated for inclusion in the near future. Colorectal surgery procedures are extremely common and account for a high proportion of morbidity in general surgery. We sought to assess adverse events in colorectal surgical patients. We performed a retrospective study using the Nationwide Inpatient Sample database, 2008-2012. Patients were identified using International Classification of Diseases, Ninth Revision, Clinical Modification codes and classified based on procedure indication: colon cancer, benign polyps, diverticulitis, inflammatory bowel disease, and ischemic colitis. The outcome of interest was inpatient adverse events identified by the Agency for Healthcare Research and Quality's patient safety indicators (PSIs). We identified 1,100,184 colorectal patients who underwent major surgery; 2.7% developed a PSI during their hospital stay. Compared to all colorectal patients, those with ischemic colitis had significantly higher risk-adjusted rates per 1000 cases for pressure ulcer (50.20), failure to rescue (211.30), central line bloodstream infection (2.33), PO pulmonary embolism/deep vein thrombosis (16.02), and sepsis (46.99), whereas benign polyps were associated with significantly lower risk-adjusted rates per 1000 cases for pressure ulcer (11.48), failure to rescue (84.79), central line bloodstream infection (0.97), PO pulmonary embolism/deep vein thrombosis (4.81), and sepsis (11.23). Compared to both patient demographic and clinical characteristics, the procedure indication was the strongest predictor of any PSI relevant to colorectal surgery; patients with ischemic colitis had higher odds of experiencing a PSI (odds ratio, 1.84; 95% confidence interval, 1.71-1.99) compared with cancer patients. Among colorectal surgery patients, inpatient adverse events were not uncommon.
We found important differential rates of adverse events by diagnostic category, with the highest odds ratio occurring in patients undergoing surgery for ischemic colitis. Our work underscores the need for rigorous risk adjustment, quality improvement strategies for high-risk populations, and attention to detail in calculating financial incentives in emerging value-based purchasing programs.
View details for DOI 10.1016/j.jss.2016.01.013
View details for Web of Science ID 000376334700013
View details for PubMedID 27229107
Electronic Health Records and Quality of Care: An Observational Study Modeling Impact on Mortality, Readmissions, and Complications
2016; 95 (19)
Electronic health records (EHRs) were implemented to improve quality of care and patient outcomes. This study assessed the relationship between EHR adoption and patient outcomes. We performed an observational study using State Inpatient Databases linked to the American Hospital Association survey, 2011. Surgical and medical patients from 6 large, diverse states were included. We performed univariate analyses and developed hierarchical regression models relating the level of EHR utilization to mortality, readmission rates, and complications. We evaluated the effect of EHR adoption on outcomes in a difference-in-differences analysis, 2008 to 2011. Medical and surgical patients sought care at hospitals reporting no EHR (3.5%), partial EHR (55.2%), and full EHR systems (41.3%). In univariate analyses, patients at hospitals with full EHR had the lowest rates of inpatient mortality, readmissions, and Patient Safety Indicators, followed by patients at hospitals with partial EHR and then patients at hospitals with no EHR (P < 0.05). However, these associations were not robust when accounting for other patient and hospital factors, and adoption of an EHR system was not associated with improved patient outcomes (P > 0.05). These results indicate that patients receiving medical and surgical care at hospitals with no EHR system have similar outcomes compared to patients seeking care at hospitals with a full EHR system, after controlling for important confounders. To date, we have not yet seen the promised benefits of EHR systems on patient outcomes in the inpatient setting. EHRs may play a smaller role than expected in patient outcomes and overall quality of care.
View details for DOI 10.1097/MD.0000000000003332
View details for Web of Science ID 000376927000010
View details for PubMedID 27175631
View details for PubMedCentralID PMC4902473
Performance Measures in Neurosurgical Patient Care: Differing Applications of Patient Safety Indicators
2016; 54 (4): 359-364
Patient Safety Indicators (PSIs) are administratively coded identifiers of potentially preventable adverse events. These indicators are used for multiple purposes, including benchmarking and quality improvement efforts. Baseline PSI evaluation in high-risk surgeries is fundamental to both purposes. To determine PSI rates and their impact on other outcomes in patients undergoing cranial neurosurgery compared with other surgeries. The Agency for Healthcare Research and Quality (AHRQ) PSI software was used to flag adverse events and determine risk-adjusted rates (RARs). Regression models were built to assess the association between PSIs and important patient outcomes. We identified cranial neurosurgeries based on International Classification of Diseases, Ninth Revision, Clinical Modification codes in the California, Florida, New York, Arkansas, and Mississippi State Inpatient Databases, AHRQ, 2010-2011. Outcomes were PSI development, 30-day all-cause readmission, length of stay, hospital costs, and inpatient mortality. A total of 48,424 neurosurgical patients were identified. Procedure indication was strongly associated with PSI development. The neurosurgical population had significantly higher RARs for most PSIs evaluated compared with other surgical patients. Development of a PSI was strongly associated with increased length of stay and hospital cost and, for certain PSIs, increased inpatient mortality and 30-day readmission. In this population-based study, certain accountability measures proposed for use as value-based payment modifiers show higher RARs in neurosurgery patients compared with other surgical patients and were subsequently associated with poor outcomes. Our results indicate that for quality improvement efforts, the current AHRQ risk-adjustment models should be viewed in clinically meaningful stratified subgroups; for profiling and pay-for-performance applications, additional factors should be included in the risk-adjustment models. Further evaluation of PSIs in additional high-risk surgeries is needed to better inform the use of these metrics.
View details for DOI 10.1097/MLR.0000000000000490
View details for Web of Science ID 000372935200004
Medicaid Dental Coverage Alone May Not Lower Rates Of Dental Emergency Department Visits
2015; 34 (8): 1349-1357
Medicaid was expanded to millions of individuals under the Affordable Care Act, but many states do not provide dental coverage for adults under their Medicaid programs. In the absence of dental coverage, patients may resort to costly emergency department (ED) visits for dental conditions. Medicaid coverage of dental benefits could help ease the burden on the ED, but ED use for dental conditions might remain a problem in areas with a scarcity of dentists. We examined county-level rates of ED visits for nontraumatic dental conditions in twenty-nine states in 2010 in relation to dental provider density and Medicaid coverage of nonemergency dental services. Higher density of dental providers was associated with lower rates of dental ED visits by patients with Medicaid in rural counties but not in urban counties, where most dental ED visits occurred. County-level Medicaid-funded dental ED visit rates were lower in states where Medicaid covered nonemergency dental services than in other states, although this difference was not significant after other factors were adjusted for. Providing dental coverage alone might not reduce Medicaid-funded dental ED visits if patients do not have access to dental providers.
View details for DOI 10.1377/hlthaff.2015.0223
View details for Web of Science ID 000361141000015
Patient safety in plastic surgery: identifying areas for quality improvement efforts.
Annals of plastic surgery
2015; 74 (5): 597-602
Improving quality of health care is a global priority. Before quality benchmarks are established, we first must understand rates of adverse events (AEs). This project assessed risk-adjusted rates of inpatient AEs for soft tissue reconstructive procedures. Patients receiving soft tissue reconstructive procedures from 2005 to 2010 were extracted from the Nationwide Inpatient Sample. Inpatient AEs were identified using patient safety indicators (PSIs), established measures developed by the Agency for Healthcare Research and Quality. We identified 409,991 patients with soft tissue reconstruction, and 16,635 (4.06%) had a PSI during their hospital stay. Patient safety indicators were associated with increased risk-adjusted mortality, longer length of stay, and decreased routine disposition (P < 0.01). Patient characteristics associated with a higher risk-adjusted rate per 1000 patients at risk included older age, male sex, nonwhite race, and public payer (P < 0.05). Overall, plastic surgery patients had significantly lower risk-adjusted rates compared to other surgical inpatients for all events evaluated except failure to rescue and postoperative hemorrhage or hematoma, which were not statistically different. Risk-adjusted rates of hemorrhage or hematoma were significantly higher in patients receiving size-reduction surgery, and these rates were further accentuated when broken down by sex and payer. In general, plastic surgery patients had lower rates of in-hospital AEs than other surgical disciplines, but PSIs were not uncommon. With the establishment of national baseline PSI rates in plastic surgery patients, benchmarks can be devised and target areas for quality improvement efforts identified. Further prospective studies should be designed to elucidate the drivers of AEs identified in this population.
View details for DOI 10.1097/SAP.0b013e318297791e
View details for PubMedID 24108144
Impact of including readmissions for qualifying events in the patient safety indicators.
American journal of medical quality
2015; 30 (2): 114-118
The Agency for Healthcare Research and Quality Patient Safety Indicators (PSIs) do not capture complications arising after discharge. This study sought to quantify the bias related to omission of readmissions for PSI-qualifying conditions. Using 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data, the study team examined the change in PSI rates when including readmissions in the numerator, hospitals performing in the extreme deciles, and longitudinal performance. Including 7-day readmissions resulted in a 0.3% to 8.9% increase in average hospital PSI rates. Hospital PSI rates with and without PSI-qualifying 30-day readmissions were highly correlated for point estimates and within-hospital longitudinal change. Most hospitals remained in the same relative performance decile. Longer length of stay, public payer, and discharge to skilled nursing facilities were associated with a higher risk of readmission for a PSI-qualifying event. Failure to include readmissions in calculating PSIs is unlikely to lead to erroneous conclusions.
View details for DOI 10.1177/1062860613518341
View details for PubMedID 24463327
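The bias examined in the study above can be illustrated with a toy calculation. Everything below (counts, rates, the 7-day window) is hypothetical and only sketches the arithmetic of adding PSI-qualifying readmission events to the indicator numerator; it is not the study's actual method or data.

```python
# Illustrative sketch with invented numbers: how including readmissions for
# PSI-qualifying events changes a hospital's observed PSI rate.

def psi_rate(numerator_events: int, at_risk: int) -> float:
    """Observed PSI rate per 1,000 discharges at risk."""
    return 1000.0 * numerator_events / at_risk

# Hypothetical hospital: 10,000 discharges at risk, 120 index-stay PSI
# events, plus 6 qualifying events captured only via 7-day readmissions.
index_only = psi_rate(120, 10_000)         # index admissions only
with_readmits = psi_rate(120 + 6, 10_000)  # numerator includes readmissions

pct_increase = 100.0 * (with_readmits - index_only) / index_only
print(f"{index_only:.1f} -> {with_readmits:.1f} per 1,000 "
      f"(+{pct_increase:.1f}% relative change)")
```

A small absolute change like this, applied consistently across hospitals, is why relative rankings can stay stable even when rates shift, which is the study's central finding.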
Interhospital Facility Transfers in the United States: A Nationwide Outcomes Study.
Journal of patient safety
Patient transfers between hospitals are becoming more common in the United States. Disease-specific studies have reported varying outcomes associated with transfer status. However, even as national quality improvement efforts and regulations are being actively adopted, forcing hospitals to become financially accountable for the quality of care provided, surprisingly little is known about transfer patients or their outcomes at a population level. This population-wide study provides timely analyses of the characteristics of this particularly vulnerable and sizable inpatient population. We identified and compared characteristics and outcomes of transfer and nontransfer patients. With the use of the 2009 Nationwide Inpatient Sample, a nationally representative sample of U.S. hospitalizations, we examined patient characteristics, in-hospital adverse events, and discharge disposition for transfer versus nontransfer patients in this observational study. We identified 1,397,712 transfer patients and 31,692,211 nontransfer patients. Age, sex, race, and payer were significantly associated with odds of transfer (P < 0.05). Transfer patients had higher risk-adjusted inpatient mortality (4.6 versus 2.1, P < 0.01), longer length of stay (13.3 versus 4.5, P < 0.01), and fewer routine disposition discharges (53.6 versus 68.7, P < 0.01). In-hospital adverse events were significantly higher in transfer patients compared with nontransfer patients (P < 0.05). Our results suggest that transfer patients have inferior outcomes compared with nontransfer patients. Although they are clinically complex patients and assigning accountability between the transferring and receiving hospitals is methodologically difficult, transfer patients must nonetheless be included in quality benchmark data to assess the potential impact this population has on hospital outcome profiles.
With hospital accountability and value-based payments constituting an integral part of health care reform, documenting the quality of care delivered to transfer patients is essential before accurate quality assessment and improvement efforts can begin in this patient population.
View details for PubMedID 25397857
The Association of Nurse-to-Patient Ratio with Mortality and Preventable Complications Following Aortic Valve Replacement.
Journal of cardiac surgery
2014; 29 (2): 141-148
To examine hospital resources associated with patient outcomes for aortic valve replacement (AVR), including inpatient adverse events and mortality. We used the Nationwide Inpatient Sample to identify AVR procedures from 1998 to 2010 and the American Hospital Association Annual Survey to augment hospital characteristics. Primary outcomes included mortality and the development of adverse events, identified using standardized patient safety indicators (PSIs). Patient and hospital characteristics associated with PSI development were evaluated using univariate and multivariate analyses. An estimated 410,157 AVRs were performed at 5009 hospitals in the US between 1998 and 2010. The number of procedures grew annually by 4.72% (p = 0.0003) in high-volume hospitals, 4.48% in medium-volume hospitals (p < 0.0001), and 2.03% in low-volume hospitals (p = 0.154). Mortality was highest in low-volume hospitals at 4.70%, decreasing to 4.14% and 3.73% in medium- and high-volume hospitals, respectively (p = 0.0002). Rates of PSIs did not vary significantly across volume terciles (p = 0.254). Multivariate logistic regression analysis showed that low-volume hospitals had an increased risk of mortality compared with high-volume hospitals (odds ratio [OR]: 1.42; 95% confidence interval [CI]: 1.01 to 2.00), while hospital volume was not associated with adverse events. PSI development was associated with small hospitals as compared with large (OR: 1.63, 95% CI: 1.16 to 2.28) and inversely associated with a higher nurse-to-patient ratio (OR: 0.94, 95% CI: 0.90 to 0.99). The volume-outcomes relationship was associated with mortality but not postoperative complications. We identified structural differences in hospital size, nurse-to-patient ratio, and nursing skill level indicative of high-quality outcomes.
View details for DOI 10.1111/jocs.12284
View details for PubMedID 24417274
Empirical examination of the indicator 'pediatric gastroenteritis hospitalization rate' based on administrative hospital data in Italy
ITALIAN JOURNAL OF PEDIATRICS
Awareness of the importance of strengthening investments in child health and monitoring the quality of services in the pediatric field is increasing. The Pediatric Quality Indicators developed by the US Agency for Healthcare Research and Quality (AHRQ) use hospital administrative data to identify admissions that could be avoided through high-quality outpatient care. Building on this approach, the purpose of this study is to perform an empirical examination of the 'pediatric gastroenteritis admission rate' indicator in Italy, under the assumption that lower admission rates are associated with better management at the primary care level and with overall better quality of care for children. Following the AHRQ process for evaluating quality indicators, we examined age exclusion/inclusion criteria, selection of diagnostic codes, hospitalization type, and methodological issues for the 'pediatric gastroenteritis admission rate'. The regional variability of hospitalizations was analyzed for Italian children aged 0-17 years discharged between January 1, 2009 and December 31, 2011. We considered hospitalizations for the following diagnoses: non-bacterial gastroenteritis, bacterial gastroenteritis, and dehydration (along with a secondary diagnosis of gastroenteritis). The data source was the hospital discharge records database. All rates were stratified by age. In the study period, there were 61,130 pediatric hospitalizations for non-bacterial gastroenteritis, 5,940 for bacterial gastroenteritis, and 38,820 for dehydration. In the <1-year group, the relative risk of hospitalization for non-bacterial gastroenteritis was 24 times higher than in adolescents; it dropped to 14.5 in 1- to 4-year-olds and to 3.2 in 5- to 9-year-olds.
At the national level, the percentage of admissions for bacterial gastroenteritis was small compared with non-bacterial, while including admissions for dehydration revealed a significant variability in diagnostic coding among regions that affected the regional performance of the indicator. For broadest application, we propose a 'pediatric gastroenteritis admission rate' that includes bacterial gastroenteritis and dehydration diagnoses in the numerator, as well as infants aged <3 months. We also suggest adjusting for age and including day hospital admissions. Future evaluation by a clinical panel at the national level might be helpful to determine appropriate application for such measures and to make recommendations to policy makers.
View details for DOI 10.1186/1824-7288-40-14
View details for Web of Science ID 000331901600001
View details for PubMedID 24512747
View details for PubMedCentralID PMC3923239
Empirical validation of the "Pediatric Asthma Hospitalization Rate" indicator
ITALIAN JOURNAL OF PEDIATRICS
Quality assessment in pediatric care has recently gained momentum. Although many of the approaches to indicator development are similar regardless of the population of interest, few nationwide sets of indicators specifically designed for assessment of primary care of children exist. We performed an empirical analysis of the validity of the "Pediatric Asthma Hospitalization Rate" indicator under the assumption that lower admission rates are associated with better performance of primary health care. The validity of the "Pediatric Asthma Hospitalization Rate" indicator proposed by the Agency for Healthcare Research and Quality was investigated in the Italian context, with a focus on selection of diagnostic codes, hospitalization type, and risk adjustment. Seasonality and regional variability of hospitalization rates for asthma were analyzed for Italian children aged 2-17 years discharged between January 1, 2009, and December 31, 2011, using the hospital discharge records database. Specific rates were computed for the age classes 2-4, 5-9, 10-14, and 15-17 years. In the years 2009-2011, the number of pediatric hospitalizations for asthma was 14,389 (average annual rate: 0.52 per 1,000), with large variability across regions. In children aged 2-4 years, the risk of hospitalization for asthma was 14 times higher than in adolescents; it dropped to 4 in 5- to 9-year-olds and to 1.1 in 10- to 14-year-olds. The inclusion of diagnoses of bronchitis revealed that asthma and bronchitis are equally represented as causes of hospital admissions and have a similar seasonality in preschool children, while older age groups experience hospital admissions mainly in spring and fall, a pattern consistent with a diagnosis of atopic asthma.
Rates of day hospital admissions for asthma were up to 5 times higher than the national average in Liguria and some Southern regions, and close to zero in some Northern regions. The patterns of hospitalization for pediatric asthma in Italy showed that at least two different indicators are needed to accurately measure the quality of care provided to children. The candidate indicators should also include day hospital admissions to better assess accessibility. Future evaluation by a structured clinical panel review at the national level might be helpful to refine indicator definitions and risk groupings, to determine appropriate application for such measures, and to make recommendations to policy makers.
View details for DOI 10.1186/1824-7288-40-7
View details for Web of Science ID 000331899500001
View details for PubMedID 24447802
View details for PubMedCentralID PMC3899920
Hospitalization Rates and Post-Operative Mortality for Abdominal Aortic Aneurysm in Italy over the Period 2000-2011
PLOS ONE
2013; 8 (12)
Recent studies have reported declines in incidence, prevalence, and mortality for abdominal aortic aneurysms (AAAs) in various countries, but evidence from Mediterranean countries is lacking. The aim of this study is to examine the trend of hospitalization and post-operative mortality rates for AAAs in Italy during the period 2000-2011, taking into account the introduction of endovascular aneurysm repair (EVAR) in the 1990s. This retrospective cohort study was carried out in Emilia-Romagna, an Italian region with 4.5 million inhabitants. A total of 19,673 patients hospitalized for AAAs between 2000 and 2011 were identified from the hospital discharge records (HDR) database. Hospitalization rates, percentages of open surgical repair (OSR) and EVAR, and 30-day mortality rates were calculated for unruptured (uAAAs) and ruptured AAAs (rAAAs). Adjusted hospitalization rates decreased on average by 2.9% per year for uAAAs and 3.2% for rAAAs (p<0.001). The temporal trend of 30-day mortality rates remained stable for both groups. The percentage of EVAR for uAAAs increased significantly from 2006 to 2011 (42.7% versus 60.9%, respectively; mean change of 3.9% per year, p<0.001). No significant difference in mortality was found between OSR and EVAR for uAAAs or rAAAs. The incidence and trend of hospitalization rates for rAAAs and uAAAs decreased significantly in the last decade, while 30-day mortality rates in operated patients remained stable. OSR continued to be the most common surgery for rAAAs, although the gap between OSR and EVAR recently narrowed. EVAR became the preferred surgery for uAAAs from 2008 onward.
View details for DOI 10.1371/journal.pone.0083855
View details for Web of Science ID 000329194700075
View details for PubMedID 24386294
View details for PubMedCentralID PMC3875532
Implications of Metric Choice for Common Applications of Readmission Metrics
HEALTH SERVICES RESEARCH
2013; 48 (6): 1978-1995
OBJECTIVE: To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid Services 30-day readmission (CMS). DATA SOURCES: 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file. STUDY DESIGN: We calculated 30-day readmission rates using the three metrics for three disease groups: heart failure (HF), acute myocardial infarction (AMI), and pneumonia. Using each metric, we calculated the absolute change and correlation between performance; the percent of hospitals remaining in extreme deciles and level of agreement; and differences in longitudinal performance. PRINCIPAL FINDINGS: Average hospital rates for HF patients and the CMS metric were generally higher than for other conditions and metrics. Correlations between the ACR and CMS metrics were highest (r = 0.67-0.84). Rates calculated using the PPR and either the ACR or CMS metric were moderately correlated (r = 0.50-0.67). Between 47 and 75 percent of hospitals in an extreme decile according to one metric remained there when using a different metric. Correlations among metrics were modest when measuring hospital longitudinal change. CONCLUSIONS: Different approaches to computing readmissions can produce different hospital rankings and impact pay-for-performance. Careful consideration should be given to readmission metric choice for these applications.
View details for DOI 10.1111/1475-6773.12075
View details for Web of Science ID 000327392300011
View details for PubMedID 23742056
Limitations of using same-hospital readmission metrics
INTERNATIONAL JOURNAL FOR QUALITY IN HEALTH CARE
2013; 25 (6): 633-639
To quantify the limitations associated with restricting readmission metrics to same-hospital-only readmission. Using the 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file, we identified the proportion of 7-, 15- and 30-day readmissions occurring at the same hospital as the initial admission using the All-cause Readmission (ACR) and 3M Corporation Potentially Preventable Readmissions (PPR) metrics. We examined the correlation between performance using same- and different-hospital readmission, the percent of hospitals remaining in the extreme deciles when utilizing different metrics, agreement in identifying outliers, and differences in longitudinal performance. Using logistic regression, we examined the factors associated with readmission to the same hospital. 68% of 30-day ACR and 70% of 30-day PPR readmissions occurred at the same hospital. Abdominopelvic procedures had higher proportions of same-hospital readmissions (87.4-88.9%), cardiac surgery had lower (72.5-74.9%), and medical DRGs were lower than surgical DRGs (67.1 vs. 71.1%). Correlation and agreement in identifying high- and low-performing hospitals were weak to moderate, except for 7-day metrics, where agreement was stronger (r = 0.23-0.80, Kappa = 0.38-0.76). Agreement for within-hospital significant (P < 0.05) longitudinal change was weak (Kappa = 0.05-0.11). Beyond all patient refined-diagnostic related groups, payer was the most predictive factor, with Medicare and MediCal patients having a higher likelihood of same-hospital readmission (OR 1.62 and 1.73). Same-hospital readmission metrics are limited for all tested applications. Caution should be used when conducting research, quality improvement, or comparative applications that do not account for readmissions to other hospitals.
View details for DOI 10.1093/intqhc/mzt068
View details for Web of Science ID 000327791600003
View details for PubMedID 24167061
View details for PubMedCentralID PMC3842125
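The kappa statistics reported above measure agreement in which hospitals get flagged as outliers under different metric definitions. A minimal sketch of that calculation follows; the flag vectors are invented and the `cohens_kappa` helper is a plain two-rater binary implementation, not the study's code.

```python
# Illustrative sketch (invented labels): Cohen's kappa for agreement between
# outlier flags from a same-hospital-only metric and an all-hospital metric.

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Cohen's kappa for two binary raters (1 = flagged as outlier)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n  # raw agreement
    p_a1 = sum(a) / n  # rater A's flagging rate
    p_b1 = sum(b) / n  # rater B's flagging rate
    # Agreement expected by chance if the raters flagged independently.
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Ten hypothetical hospitals: flagged (1) or not (0) as high-readmission
# outliers by each metric.
same_hospital_flags = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
all_hospital_flags = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]

kappa = cohens_kappa(same_hospital_flags, all_hospital_flags)
print(f"kappa = {kappa:.2f}")  # chance-corrected agreement
```

A kappa in the 0.4-0.6 range, as this toy example produces, is conventionally read as moderate agreement, consistent with the weak-to-moderate agreement the study reports for most windows.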
Considering Context in Quality Improvement Interventions and Implementation: Concepts, Frameworks, and Application
ACADEMIC PEDIATRICS
2013; 13 (6): S45-S53
Growing consensus within the health care field suggests that context matters and needs more concerted study to help those who implement and conduct research on quality improvement interventions. Health care delivery system decision makers require information about whether an intervention tested in one context will work in another with some differences from the original site. We aimed to define key terms, enumerate candidate domains for the study of context, provide examples from the pediatric quality improvement literature, and identify potential measures for selected contexts. Key sources include the organizational literature, broad evaluation frameworks, and a recent project in the patient safety area on context sensitivity. The article concludes with limitations and next steps for developments in this area.
View details for Web of Science ID 000327688700012
View details for PubMedID 24268084
Evaluating the state of quality-improvement science through evidence synthesis: insights from the closing the quality gap series.
The Permanente journal
2013; 17 (4): 52-61
The Closing the Quality Gap series from the Agency for Healthcare Research and Quality summarizes evidence for eight high-priority health care topics: outcomes used in disability research, bundled payment programs, public reporting initiatives, health care disparities, palliative care, the patient-centered medical home, prevention of health care-associated infections, and medication adherence. Our objective was to distill evidence from this series and provide insight into the "state of the science" of quality improvement (QI). We provided common guidance for topic development and qualitatively synthesized evidence from the series topic reports to identify cross-topic themes, challenges, and evidence gaps as related to QI practice and science. Among topics that examined the effectiveness of QI interventions, we found improvement in some outcomes but not others. Implementation context and potential harms from QI activities were not widely evaluated or reported, although market factors appeared important for incentive-based QI strategies. Patient-focused and systems-focused strategies were generally more effective than clinician-focused strategies, although the latter approach improved clinician adherence to infection prevention strategies. Audit and feedback appeared better for targeting professionals and organizations, but not patients. Topic reviewers observed heterogeneity in outcomes used for QI evaluations, weaknesses in study design, and incomplete reporting. Synthesizing evidence across topics provided insight into the state of the QI field for practitioners and researchers. To facilitate future evidence synthesis, consensus is needed around a smaller set of outcomes for use in QI evaluations and a framework and lexicon to describe QI interventions more broadly, in alignment with the needs of decision makers responsible for improving quality.
View details for DOI 10.7812/TPP/13-010
View details for PubMedID 24079357
View details for PubMedCentralID PMC3854810
The patient is in: patient involvement strategies for diagnostic error mitigation
BMJ QUALITY & SAFETY
2013; 22: ii33-ii39
Although healthcare quality and patient safety have longstanding international attention, the target of reducing diagnostic errors has only recently gained prominence, even though numerous patients, families and professional caregivers have suffered from diagnostic mishaps for a long time. Similarly, patients have always been involved in their own care to some extent, but only recently have patients sought more opportunities for engagement and participation in healthcare improvements. This paper brings these two promising trends together, analysing strategies for patient involvement in reducing diagnostic errors in an individual's own care, in improving the healthcare delivery system's diagnostic safety, and in contributing to research and policy development on diagnosis-related issues.
View details for DOI 10.1136/bmjqs-2012-001623
View details for Web of Science ID 000324736900006
View details for PubMedID 23893394
View details for PubMedCentralID PMC3786634
A systematic review of the care coordination measurement landscape
BMC HEALTH SERVICES RESEARCH
Care coordination has increasingly been recognized as an important aspect of high-quality health care delivery. Robust measures of coordination processes will be essential tools to evaluate, guide, and support efforts to understand and improve coordination, yet little agreement exists among stakeholders about how best to measure care coordination. We aimed to review and characterize existing measures of care coordination processes and identify areas of high and low density to guide future measure development. We conducted a systematic review of measures published in MEDLINE through April 2012 and identified from additional key sources and informants. We characterized included measures with respect to the aspects of coordination measured (domain), measurement perspective (patient/family, health care professional, system representative), applicable settings and patient populations (by age and condition), and data used (survey, chart review, administrative claims). Among the 96 included measure instruments, most relied on survey methods (88%) and measured aspects of communication (93%), in particular the transfer of information (81%). Few measured changing coordination needs (11%). Nearly half (49%) of instruments mapped to the patient/family perspective; 29% mapped to the system representative and 27% to the health care professional perspective. Few instruments were applicable to settings other than primary care (58%), inpatient facilities (25%), and outpatient specialty care (22%). New measures are needed that evaluate changing coordination needs, coordination as perceived by health care professionals, coordination in the home health setting, and coordination for patients at the end of life.
View details for DOI 10.1186/1472-6963-13-119
View details for Web of Science ID 000318685100001
View details for PubMedID 23537350
View details for PubMedCentralID PMC3651252
Simulation Exercises as a Patient Safety Strategy: A Systematic Review
ANNALS OF INTERNAL MEDICINE
2013; 158 (5): 426-?
Simulation is a versatile technique used in a variety of health care settings for a variety of purposes, but the extent to which simulation may improve patient safety remains unknown. This systematic review examined evidence on the effects of simulation techniques on patient safety outcomes. PubMed and the Cochrane Library were searched from their beginning to 31 October 2012 to identify relevant studies. A single reviewer screened 913 abstracts and selected and abstracted data from 38 studies that reported outcomes during care of real patients after patient-, team-, or system-level simulation interventions. Studies varied widely in the quality of methodological design and description of simulation activities, but in general, simulation interventions improved the technical performance of individual clinicians and teams during critical events and complex procedures. Limited evidence suggested improvements in patient outcomes attributable to simulation exercises at the health system level. Future studies would benefit from standardized reporting of simulation components and identification of robust patient safety targets.
View details for Web of Science ID 000316058600010
View details for PubMedID 23460100
Patient Safety Strategies Targeted at Diagnostic Errors: A Systematic Review
ANNALS OF INTERNAL MEDICINE
2013; 158 (5): 381-?
Missed, delayed, or incorrect diagnosis can lead to inappropriate patient care, poor patient outcomes, and increased cost. This systematic review analyzed evaluations of interventions to prevent diagnostic errors. Searches used MEDLINE (1966 to October 2012), the Agency for Healthcare Research and Quality's Patient Safety Network, bibliographies, and prior systematic reviews. Studies that evaluated any intervention to decrease diagnostic errors in any clinical setting and with any study design were eligible, provided that they addressed a patient-related outcome. Two independent reviewers extracted study data and rated study quality. There were 109 studies that addressed 1 or more intervention categories: personnel changes (n = 6), educational interventions (n = 11), technique (n = 23), structured process changes (n = 27), technology-based systems interventions (n = 32), and review methods (n = 38). Of 14 randomized trials, which were rated as having mostly low to moderate risk of bias, 11 reported interventions that reduced diagnostic errors. Evidence seemed strongest for technology-based systems (for example, text message alerting) and specific techniques (for example, testing equipment adaptations). Studies provided no information on harms, cost, or contextual application of interventions. Overall, the review showed a growing field of diagnostic error research and categorized and identified promising interventions that warrant evaluation in large studies across diverse settings.
View details for Web of Science ID 000316058600004
View details for PubMedID 23460094
Determinants of Adverse Events in Vascular Surgery
JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS
2012; 214 (5): 788-797
Patient safety is a national priority. Patient Safety Indicators (PSIs) monitor potential adverse events during hospital stays. Surgical specialty PSI benchmarks do not exist, and they are needed to account for differences in the range of procedures performed, reasons for the procedure, and patient characteristics. A comprehensive profile of adverse events in vascular surgery was created. The Nationwide Inpatient Sample was queried for 8 vascular procedures using ICD-9-CM codes from 2005 to 2009. Factors associated with PSI development were evaluated in univariate and multivariate analyses. A total of 1,412,703 patients underwent a vascular procedure, and a PSI developed in 5.2%. PSIs were more frequent in female, nonwhite patients with public payers (p < 0.01). Patients at mid- and low-volume hospitals had greater odds of developing a PSI (odds ratio [OR] = 1.17; 95% CI, 1.10-1.23 and OR = 1.69; 95% CI, 1.53-1.87). Amputations had the highest PSI risk-adjusted rate, and carotid endarterectomy and endovascular abdominal aortic aneurysm repair had lower risk-adjusted rates (p < 0.0001). The PSI risk-adjusted rate increased linearly with severity of patient indication: claudication (OR = 0.40; 95% CI, 0.35-0.46), rest pain (OR = 0.78; 95% CI, 0.69-0.90), ulcer (OR = 1.20; 95% CI, 1.07-1.34), and gangrene (OR = 1.85; 95% CI, 1.66-2.06). Patient safety events in vascular surgery were frequent and varied by procedure, with amputations and open abdominal aortic aneurysm repair having considerably more potential adverse events. PSIs were associated with black race, public payer, and procedure indication. It is important to note the overall higher rates of PSIs occurring in vascular patients and to adjust benchmarks for this surgical specialty appropriately.
View details for DOI 10.1016/j.jamcollsurg.2012.01.045
View details for Web of Science ID 000303724200009
View details for PubMedID 22425449
Relationship between Patient Safety and Hospital Surgical Volume
HEALTH SERVICES RESEARCH
2012; 47 (2): 756-769
To examine the relationship between hospital volume and in-hospital adverse events. Patient safety indicators (PSIs) were used to identify hospital-acquired adverse events in the Nationwide Inpatient Sample database for abdominal aortic aneurysm repair, coronary artery bypass graft, and Roux-en-Y gastric bypass from 2005 to 2008. In this observational study, volume thresholds were defined by mean year-specific terciles, and PSI risk-adjusted rates were analyzed by volume tercile for each procedure. Overall, hospital volume was inversely related to preventable adverse events. High-volume hospitals had significantly lower risk-adjusted PSI rates compared to lower-volume hospitals (p < .05). These data support the relationship between hospital volume and quality health care delivery in select surgical cases. This study highlights differences between hospital volume and risk-adjusted PSI rates for three common surgical procedures and identifies areas of focus for future studies seeking pathways to reduce hospital-acquired events.
View details for DOI 10.1111/j.1475-6773.2011.01310.x
View details for Web of Science ID 000301229300012
View details for PubMedID 22091561
Assessment of a Novel Hybrid Delphi and Nominal Groups Technique to Evaluate Quality Indicators
HEALTH SERVICES RESEARCH
2011; 46 (6): 2005-2018
To test the implementation of a novel structured panel process in the evaluation of quality indicators. A national panel of 64 clinicians rated the usefulness of indicator applications in 2008-2009. The hybrid panel combined Delphi Group and Nominal Group (NG) techniques to evaluate 81 indicator applications. The Delphi Group and NG rated 56 percent of indicator applications similarly. Group assignment (Delphi versus Nominal) was not significantly associated with mean ratings, but the specialty and research interests of panelists, and indicator factors such as denominator level and proposed use, were. Rating distributions narrowed significantly in 20.8 percent of applications between review rounds. The hybrid panel process facilitated information exchange and tightened rating distributions. Future assessments of this method might include a control panel.
View details for DOI 10.1111/j.1475-6773.2011.01297.x
View details for Web of Science ID 000297244000017
View details for PubMedID 21790589
Expanding the Uses of AHRQ's Prevention Quality Indicators: Validity From the Clinician Perspective
MEDICAL CARE
2011; 49 (8): 679-685
The Agency for Healthcare Research and Quality's prevention quality indicators (PQIs) are used as a metric of area-level access to quality care. Recently, interest has expanded to using the measures at the level of payer or large physician groups, including public reporting or pay-for-performance programs. However, the validity of these expanded applications is unknown. We conducted a novel panel process to establish face validity of the 12 PQIs at 3 denominator levels (geographic area, payer, and large physician group) and 3 uses (quality improvement, comparative reporting, and pay for performance). Sixty-four clinician panelists were split into Delphi and Nominal Groups. We aimed to capitalize on the reliability of the Delphi method and the information sharing of the Nominal Group method by applying these techniques simultaneously. We examined panelists' perceived usefulness of the indicators for specific uses using median scores and agreement within and between groups. Panelists showed stronger support for the usefulness of chronic disease indicators at the payer and large physician group levels than for acute disease indicators. Panelists fully supported the usefulness of 2 indicators for comparative reporting (asthma, congestive heart failure) and no indicators for pay-for-performance applications. Panelists expressed serious concerns about the usefulness of all new applications of 3 indicators (angina, perforated appendix, dehydration). Panelists rated age, current comorbidities, earlier hospitalization, and socioeconomic status as the most important risk-adjustment factors. Clinicians supported some expanded uses of the PQIs, but generally expressed reservations. Attention to denominator definitions and risk adjustment is essential for expanded use.
DOI: 10.1097/MLR.0b013e3182159e65
Web of Science ID: 000292758500001
PubMed ID: 21478780
A framework for classifying patient safety practices: results from an expert consensus process
BMJ QUALITY & SAFETY
2011; 20 (7): 618-624
Development of a coherent literature evaluating patient safety practices has been hampered by the lack of an underlying conceptual framework. The authors describe issues and choices in describing and classifying diverse patient safety practices (PSPs).

The authors developed a framework to classify PSPs by identifying and synthesising existing conceptual frameworks, evaluating the draft framework by asking a group of experts to use it to classify a diverse set of PSPs and revising the framework through an expert-panel consensus process.

The 11 classification dimensions in the framework include: regulatory versus voluntary; setting; feasibility; individual activity versus organisational change; temporal (one-time vs repeated/long-term); pervasive versus targeted; common versus rare events; PSP maturity; degree of controversy/conflicting evidence; degree of behavioural change required for implementation; and sensitivity to context.

This framework offers a way to classify and compare PSPs, and thereby to interpret the patient-safety literature. Further research is needed to develop understanding of these dimensions, how they evolve as the patient safety field matures, and their relative utilities in describing, evaluating and implementing PSPs.
DOI: 10.1136/bmjqs.2010.049296
Web of Science ID: 000291727100011
PubMed ID: 21610267
How does context affect interventions to improve patient safety? An assessment of evidence from studies of five patient safety practices and proposals for research
BMJ QUALITY & SAFETY
2011; 20 (7): 604-610
Logic and experience suggest that it is easier in some situations than in others to change behaviour and organisation to improve patient safety. Knowing which 'context factors' help and hinder implementation of different changes would help implementers, as well as managers, policy makers, regulators and purchasers of healthcare. It could help them to judge the likely success of possible improvements under the conditions at hand, and to decide which of these conditions could be modified to make implementation more effective.

The study presented in this paper examined research to discover any evidence reported about whether or how context factors influence the effectiveness of five patient safety interventions.

The review found that, for these five diverse interventions, there was little strong evidence of the influence of different context factors. However, the research was not designed to investigate context influence.

The paper suggests that significant gaps in research exist and makes proposals for future research to better inform decision-making.
DOI: 10.1136/bmjqs.2010.047035
Web of Science ID: 000291727100009
PubMed ID: 21493589
What context features might be important determinants of the effectiveness of patient safety practice interventions?
BMJ QUALITY & SAFETY
2011; 20 (7): 611-617
Differences in contexts (eg, policies, healthcare organisation characteristics) may explain variations in the effects of patient safety practice (PSP) implementations. However, knowledge of which contextual features are important determinants of PSP effectiveness is limited and consensus is lacking on a taxonomy of which contexts matter.

Iterative, formal discussions were held with a 22-member technical expert panel composed of experts or leaders in patient safety, healthcare systems, and methods. First, potentially important contextual features were identified, focusing on five PSPs. Then, two surveys were conducted to determine the contexts likely to influence PSP implementations.

The panel reached a consensus on a taxonomy of four broad domains of contextual features important for PSP implementations: safety culture, teamwork and leadership involvement; structural organisational characteristics (eg, size, organisational complexity or financial status); external factors (eg, financial or performance incentives or PSP regulations); and availability of implementation and management tools (eg, training, organisational incentives). Panelists also tended to rate specific patient safety culture, teamwork and leadership contexts as high priority for assessing their effects on PSP implementations, but tended to rate specific organisational characteristic contexts as high priority only for use in PSP evaluations. Panelists appeared split on whether specific external factors and implementation/management tools were important for assessment or only description.

This work can guide research commissioners and evaluators on the contextual features of PSP implementations that are important to report or evaluate. It represents a first step towards developing guidelines on contexts in PSP implementation evaluations. However, the science of context measurement needs maturing.
DOI: 10.1136/bmjqs.2010.049379
Web of Science ID: 000291727100010
PubMed ID: 21617166
Advancing the Science of Patient Safety
ANNALS OF INTERNAL MEDICINE
2011; 154 (10): 693-W248
Despite a decade's worth of effort, patient safety has improved slowly, in part because of the limited evidence base for the development and widespread dissemination of successful patient safety practices. The Agency for Healthcare Research and Quality sponsored an international group of experts in patient safety and evaluation methods to develop criteria to improve the design, evaluation, and reporting of practice research in patient safety. This article reports the findings and recommendations of this group, which include greater use of theory and logic models, more detailed descriptions of interventions and their implementation, enhanced explanation of desired and unintended outcomes, and better description and measurement of context and of how context influences interventions. Using these criteria and measuring and reporting contexts will improve the science of patient safety.
DOI: 10.1059/0003-4819-154-10-201105170-00011
Web of Science ID: 000290620300019
PubMed ID: 21576538
The role of theory in research to develop and evaluate the implementation of patient safety practices
BMJ QUALITY & SAFETY
2011; 20 (5): 453-459
Theories provide a way of understanding and predicting the effects of patient safety practices (PSPs), interventions intended to prevent or mitigate harm caused by healthcare or risks of such harm. Yet most published evaluations make little or no explicit reference to theory, thereby hindering efforts to generalise findings from one context to another. Theories from a wide range of disciplines are potentially relevant to research on PSPs. Theory can be used in research to explain clinical and organisational behaviour, to guide the development and selection of PSPs, and in evaluating their implementation and mechanisms of action. One key recommendation from an expert consensus process is that researchers should describe the theoretical basis for chosen intervention components or provide an explicit logic model for 'why this PSP should work.' Future theory-driven evaluations would enhance generalisability and help build a cumulative understanding of the nature of change.
DOI: 10.1136/bmjqs.2010.047993
Web of Science ID: 000289769100012
PubMed ID: 21317181
Systematic Review: Benefits and Harms of In-Hospital Use of Recombinant Factor VIIa for Off-Label Indications
ANNALS OF INTERNAL MEDICINE
2011; 154 (8): 529-W190
Recombinant factor VIIa (rFVIIa), a hemostatic agent approved for hemophilia, is increasingly used for off-label indications.

To evaluate the benefits and harms of rFVIIa use for 5 off-label, in-hospital indications: intracranial hemorrhage, cardiac surgery, trauma, liver transplantation, and prostatectomy.

Ten databases (including PubMed, EMBASE, and the Cochrane Library) queried from inception through December 2010. Articles published in English were analyzed.

Two reviewers independently screened titles and abstracts to identify clinical use of rFVIIa for the selected indications and identified all randomized, controlled trials (RCTs) and observational studies for full-text review.

Two reviewers independently assessed study characteristics and rated study quality and indication-wide strength of evidence.

16 RCTs, 26 comparative observational studies, and 22 noncomparative observational studies met inclusion criteria. Identified comparators were limited to placebo (RCTs) or usual care (observational studies). For intracranial hemorrhage, mortality was not improved with rFVIIa use across a range of doses. Arterial thromboembolism was increased with medium-dose rFVIIa use (risk difference [RD], 0.03 [95% CI, 0.01 to 0.06]) and high-dose rFVIIa use (RD, 0.06 [CI, 0.01 to 0.11]). For adult cardiac surgery, there was no mortality difference, but there was an increased risk for thromboembolism (RD, 0.05 [CI, 0.01 to 0.10]) with rFVIIa. For body trauma, there were no differences in mortality or thromboembolism, but there was a reduced risk for the acute respiratory distress syndrome (RD, -0.05 [CI, -0.08 to -0.02]). Mortality was higher in observational studies than in RCTs.

The amount and strength of evidence were low for most outcomes and indications. Publication bias could not be excluded.

Limited available evidence for 5 off-label indications suggests no mortality reduction with rFVIIa use. For some indications, it increases thromboembolism.
PubMed ID: 21502651
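The risk differences quoted in the rFVIIa review above (e.g., an increased thromboembolism risk of RD 0.05 [CI, 0.01 to 0.10] in cardiac surgery) follow the standard two-proportion calculation. A minimal sketch with invented counts, not the review's data, and a simple Wald interval rather than whatever pooling the authors actually used:

```python
# Hedged sketch: absolute risk difference (RD) with a Wald 95% CI from
# hypothetical 2x2 trial counts. Illustrative only, not the review's data.
from math import sqrt

def risk_difference(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Return (RD, lower, upper) for treatment minus control."""
    p1, p0 = events_tx / n_tx, events_ctrl / n_ctrl
    rd = p1 - p0
    se = sqrt(p1 * (1 - p1) / n_tx + p0 * (1 - p0) / n_ctrl)
    return rd, rd - z * se, rd + z * se

# Invented numbers for illustration:
rd, lo, hi = risk_difference(12, 150, 5, 150)
print(f"RD = {rd:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

A negative RD, as for the acute respiratory distress syndrome result above, simply means the event was less frequent in the treated group.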
Comparison of Thromboembolic Event Rates in Randomized Controlled Trials and Observational Studies of Recombinant Factor VIIa for Off-Label Indications.
51st Annual Meeting and Exposition of the American-Society-of-Hematology
AMER SOC HEMATOLOGY. 2009: 571–72
Web of Science ID: 000272725801583
THE INFLUENCE OF ECONOMIC INCENTIVES AND REGULATORY FACTORS ON THE ADOPTION OF TREATMENT TECHNOLOGIES: A CASE STUDY OF TECHNOLOGIES USED TO TREAT HEART ATTACKS
HEALTH ECONOMICS
2009; 18 (10): 1114-1132
The Technological Change in Health Care Research Network collected unique patient-level data on three procedures for treatment of heart attack patients (catheterization, coronary artery bypass grafts and percutaneous transluminal coronary angioplasty) for 17 countries over a 15-year period to examine the impact of economic and institutional factors on technology adoption. Specific institutional factors are shown to be important to the uptake of these technologies. Health-care systems characterized as public contract systems and reimbursement systems have higher adoption rates than public-integrated health-care systems. Central control of funding of investments is negatively associated with adoption rates and the impact is of the same magnitude as the overall health-care system classification. GDP per capita also has a strong role in initial adoption. The impact of income and institutional characteristics on the utilization rates of the three procedures diminishes over time.
DOI: 10.1002/hec.1417
Web of Science ID: 000269942100002
PubMed ID: 18972326
PubMed Central ID: PMC2740812
USING THE MEDICAID ANALYTIC EXTRACT (MAX) TO IDENTIFY THE HCBS (ASSISTED LIVING) WAIVER POPULATION
OXFORD UNIV PRESS INC. 2009: 429–429
Web of Science ID: 000271794200110
Systematic Review: Elective Induction of Labor Versus Expectant Management of Pregnancy
ANNALS OF INTERNAL MEDICINE
2009; 151 (4): 252-W63
The rates of induction of labor and elective induction of labor are increasing. Whether elective induction of labor improves outcomes or simply leads to greater complications and health care costs is commonly debated in the literature.

To compare the benefits and harms of elective induction of labor and expectant management of pregnancy.

MEDLINE (through February 2009), Web of Science, CINAHL, Cochrane Central Register of Controlled Trials (through March 2009), bibliographies of included studies, and previous systematic reviews.

Experimental and observational studies of elective induction of labor reported in English.

Two authors abstracted study design; patient characteristics; quality criteria; and outcomes, including cesarean delivery and maternal and neonatal morbidity.

Of 6117 potentially relevant articles, 36 met inclusion criteria: 11 randomized, controlled trials (RCTs) and 25 observational studies. Overall, expectant management of pregnancy was associated with a higher odds ratio (OR) of cesarean delivery than was elective induction of labor (OR, 1.22 [95% CI, 1.07 to 1.39]; absolute risk difference, 1.9 percentage points [CI, 0.2 to 3.7 percentage points]) in 9 RCTs. Women at or beyond 41 completed weeks of gestation who were managed expectantly had a higher risk for cesarean delivery (OR, 1.21 [CI, 1.01 to 1.46]), but this difference was not statistically significant in women at less than 41 completed weeks of gestation (OR, 1.73 [CI, 0.67 to 4.5]). Women who were expectantly managed were more likely to have meconium-stained amniotic fluid than those who were electively induced (OR, 2.04 [CI, 1.34 to 3.09]).

Limitations: There were no recent RCTs of elective induction of labor at less than 41 weeks of gestation. The 2 studies conducted at less than 41 weeks of gestation were of poor quality and were not generalizable to current practice.

RCTs suggest that elective induction of labor at 41 weeks of gestation and beyond is associated with a decreased risk for cesarean delivery and meconium-stained amniotic fluid. There are concerns about the translation of these findings into actual practice; thus, future studies should examine elective induction of labor in settings where most obstetric care is provided.
PubMed ID: 19687492
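Pooled odds ratios like the OR of 1.22 (95% CI, 1.07 to 1.39) reported above come from combining per-trial effect estimates on the log-odds scale. A hedged sketch of DerSimonian-Laird random-effects pooling, the standard technique for this kind of synthesis, using invented 2x2 counts rather than the nine trials' data:

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of odds ratios.
# The 2x2 counts below are invented for illustration, not the trials' data.
from math import log, exp, sqrt

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table [[a, b], [c, d]]."""
    return log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pool_random_effects(tables, z=1.96):
    """Return (pooled OR, CI lower, CI upper) across a list of 2x2 tables."""
    ys, vs = zip(*(log_or_and_var(*t) for t in tables))
    w = [1 / v for v in vs]                      # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    c_denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c_denom)   # between-trial variance
    wstar = [1 / (v + tau2) for v in vs]             # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    se = sqrt(1 / sum(wstar))
    return exp(mu), exp(mu - z * se), exp(mu + z * se)

# Three hypothetical trials: (events, non-events) expectant vs induced.
trials = [(40, 160, 30, 170), (25, 225, 20, 230), (60, 440, 50, 450)]
or_pooled, lo, hi = pool_random_effects(trials)
print(f"pooled OR = {or_pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

When the heterogeneity statistic Q falls below its degrees of freedom, the between-trial variance estimate is truncated at zero and the result coincides with fixed-effect pooling.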
Approach to Improving Quality: the Role of Quality Measurement and a Case Study of the Agency for Healthcare Research and Quality Pediatric Quality Indicators
PEDIATRIC CLINICS OF NORTH AMERICA
2009; 56 (4): 815-?
Data and well-constructed measures quantify suboptimal quality in health care and play a crucial role in improving quality. Measures are useful for three major purposes: (1) driving improvements in outcomes of care by prioritizing and selecting appropriate interventions, (2) developing comparative quality reports for consumer and payer decision making and health system accountability, and (3) creating incentives that pay for performance. This article describes the current landscape for measurement in pediatrics compared to adult care, provides a case study of the development and application of a publicly available and federally funded pediatric indicator set using routinely collected hospital discharge data, and addresses challenges and opportunities in selecting and using measures as a function of intended purpose.
DOI: 10.1016/j.pcl.2009.05.009
Web of Science ID: 000269933000009
PubMed ID: 19660629
Inequality in treatment use among elderly patients with acute myocardial infarction: USA, Belgium and Quebec
BMC HEALTH SERVICES RESEARCH
2009; 9: 130
Previous research has provided evidence that socioeconomic status has an impact on invasive treatment use after acute myocardial infarction. In this paper, we compare the socioeconomic inequality in the use of high-technology diagnosis and treatment after acute myocardial infarction between the US, Quebec and Belgium, paying special attention to financial incentives and regulations as explanatory factors.

We examined hospital-discharge abstracts for all patients older than 65 who were admitted to hospitals during the 1993-1998 period in the US, Quebec and Belgium with a primary diagnosis of acute myocardial infarction. Patients' income data were imputed from the median incomes of their residential area. For each country, we compared the risk-adjusted probability of undergoing each procedure between socioeconomic categories measured by the patient's area median income.

Our findings indicate that income-related inequality exists in the use of high-technology treatment and diagnosis techniques that is not justified by differences in patients' health characteristics. Those inequalities are largely explained, in the US and Quebec, by inequalities in distances to hospitals with on-site cardiac facilities. However, in both Belgium and the US, inequalities persist among patients admitted to hospitals with on-site cardiac facilities, rejecting the hospital location effect as the single explanation for inequalities. Meanwhile, inequality levels diverge across countries (higher in the US and in Belgium, extremely low in Quebec).

The findings support the hypothesis that income-related inequality in treatment for AMI exists and is likely to be affected by a country's system of health care.
DOI: 10.1186/1472-6963-9-130
Web of Science ID: 000269526100002
PubMed ID: 19643011
PubMed Central ID: PMC3277323
Quality Improvement Strategies for Children With Asthma: A Systematic Review
ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE
2009; 163 (6): 572-581
To evaluate the evidence that quality improvement (QI) strategies can improve the processes and outcomes of outpatient pediatric asthma care.

Cochrane Effective Practice and Organisation of Care Group database (January 1966 to April 2006), MEDLINE (January 1966 to April 2006), Cochrane Consumers and Communication Group database (January 1966 to May 2006), and bibliographies of retrieved articles.

Randomized controlled trials, controlled before-after trials, or interrupted time series trials of English-language QI evaluations. Eligible studies had to include 1 or more QI strategies for the outpatient management of children with asthma.

Clinical status (eg, spirometric measures); functional status (eg, days lost from school); and health services use (eg, hospital admissions).

Seventy-nine studies met inclusion criteria: 69 included at least some component of patient education, self-monitoring, or self-management; 13 included some component of organizational change; and 7 included provider education. Self-management interventions increased symptom-free days by approximately 10 days/y (P = .02) and reduced school absenteeism by about 0.1 day/mo (P = .03). Interventions of provider education and those that incorporated organizational changes were likely to report improvements in medication use. Quality improvement interventions that provided multiple educational sessions, had longer durations, and used combinations of instructional modalities were more likely to result in improvements for patients than interventions lacking these characteristics.

A variety of QI interventions improve the outcomes and processes of care for children with asthma. Use of similar outcome measures and thorough descriptions of interventions would advance the study of QI for pediatric asthma care.
Web of Science ID: 000266566700011
PubMed ID: 19487615
CABG versus PCI for multivessel coronary artery disease: Reply
THE LANCET
2009; 373 (9682): 2200-2200
Web of Science ID: 000267444500029
Quality Improvement Strategies for Children With Asthma: A Systematic Review
ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE
2009; 163 (6): E1-E5
Web of Science ID: 000266566700016
Coronary artery bypass surgery compared with percutaneous coronary interventions for multivessel disease: a collaborative analysis of individual patient data from ten randomised trials
THE LANCET
2009; 373 (9670): 1190-1197
Coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) are alternative treatments for multivessel coronary disease. Although the procedures have been compared in several randomised trials, their long-term effects on mortality in key clinical subgroups are uncertain. We undertook a collaborative analysis of data from randomised trials to assess whether the effects of the procedures on mortality are modified by patient characteristics.

We pooled individual patient data from ten randomised trials to compare the effectiveness of CABG with PCI according to patients' baseline clinical characteristics. We used stratified, random effects Cox proportional hazards models to test the effect on all-cause mortality of randomised treatment assignment and its interaction with clinical characteristics. All analyses were by intention to treat.

Ten participating trials provided data on 7812 patients. PCI was done with balloon angioplasty in six trials and with bare-metal stents in four trials. Over a median follow-up of 5.9 years (IQR 5.0-10.0), 575 (15%) of 3889 patients assigned to CABG died compared with 628 (16%) of 3923 patients assigned to PCI (hazard ratio [HR] 0.91, 95% CI 0.82-1.02; p=0.12). In patients with diabetes (CABG, n=615; PCI, n=618), mortality was substantially lower in the CABG group than in the PCI group (HR 0.70, 0.56-0.87); however, mortality was similar between groups in patients without diabetes (HR 0.98, 0.86-1.12; p=0.014 for interaction). Patient age modified the effect of treatment on mortality, with hazard ratios of 1.25 (0.94-1.66) in patients younger than 55 years, 0.90 (0.75-1.09) in patients aged 55-64 years, and 0.82 (0.70-0.97) in patients 65 years and older (p=0.002 for interaction). Treatment effect was not modified by the number of diseased vessels or other baseline characteristics.

Long-term mortality is similar after CABG and PCI in most patient subgroups with multivessel coronary artery disease, so choice of treatment should depend on patient preferences for other outcomes. CABG might be a better option for patients with diabetes and patients aged 65 years or older because we found mortality to be lower in these subgroups.
DOI: 10.1016/S0140-6736(09)60552-3
PubMed ID: 19303634
Maternal and neonatal outcomes of elective induction of labor.
Evidence report/technology assessment
Induction of labor is on the rise in the U.S., increasing from 9.5 percent in 1990 to 22.1 percent in 2004. Although it is not entirely clear what proportion of these inductions are elective (i.e. without a medical indication), the overall rate of induction of labor is rising faster than the rate of pregnancy complications that would lead to a medically indicated induction. However, the maternal and neonatal effects of induction of labor are unclear. Many studies compare women with induction of labor to those in spontaneous labor. This is problematic, because at any point in the management of the woman with a term gestation, the clinician has the choice between induction of labor and expectant management, not spontaneous labor. Expectant management of the pregnancy involves nonintervention at any particular point in time and allowing the pregnancy to progress to a future gestational age. Thus, women undergoing expectant management may go into spontaneous labor or may require indicated induction of labor at a future gestational age.

The Stanford-UCSF Evidence-Based Practice Center examined the evidence regarding four Key Questions: What evidence describes the maternal risks of elective induction versus expectant management? What evidence describes the fetal/neonatal risks of elective induction versus expectant management? What is the evidence that certain physical conditions/patient characteristics are predictive of a successful induction of labor? How is a failed induction defined?

We performed a systematic review to answer the Key Questions. We searched MEDLINE (1966-2007) and bibliographies of prior systematic reviews and the included studies for English language studies of maternal and fetal outcomes after elective induction of labor. We evaluated the quality of included studies. When possible, we synthesized study data using random effects models.
We also evaluated the potential clinical outcomes and cost-effectiveness of elective induction of labor versus expectant management of pregnancy at 41, 40, and 39 weeks' gestation using decision-analytic models.

Our searches identified 3,722 potentially relevant articles, of which 76 articles met inclusion criteria. Nine RCTs compared expectant management with elective induction of labor. We found that overall, expectant management of pregnancy was associated with an approximately 22 percent higher odds of cesarean delivery than elective induction of labor (OR 1.22, 95 percent CI 1.07-1.39; absolute risk difference 1.9, 95 percent CI: 0.2-3.7 percent). The majority of these studies were in women at or beyond 41 weeks of gestation (OR 1.21, 95 percent CI 1.01-1.46). In studies of women at or beyond 41 weeks of gestation, the evidence was rated as moderate because of the size and number of studies and consistency of the findings. Among women less than 41 weeks of gestation, there were three trials which reported no difference in risk of cesarean delivery among women who were induced as compared to expectant management (OR 1.73; 95 percent CI: 0.67-4.5, P=0.26), but all of these trials were small, non-U.S., older, and of poor quality. When we stratified the analysis by country, we found that the odds of cesarean delivery were higher in women who were expectantly managed compared to elective induction of labor in studies conducted outside the U.S. (OR 1.22; 95 percent CI 1.05-1.40) but were not statistically different in studies conducted in the U.S. (OR 1.28; 95 percent CI 0.65-2.49). Women who were expectantly managed were also more likely to have meconium-stained amniotic fluid than those who were electively induced (OR 2.04; 95 percent CI: 1.34-3.09).
Observational studies reported a consistently lower risk of cesarean delivery among women who underwent spontaneous labor (6 percent) compared with women who had an elective induction of labor (8 percent) with a statistically significant decrease when combined (OR 0.63; 95 percent CI: 0.49-0.79), but again utilized the wrong control group and did not appropriately adjust for gestational age.

We found moderate to high quality evidence that increased parity, a more favorable cervical status as assessed by a higher Bishop score, and decreased gestational age were associated with successful labor induction (58 percent of the included studies defined success as achieving a vaginal delivery anytime after the onset of the induction of labor; in these instances, induction was considered a failure when it led to a cesarean delivery).

In the decision analytic model, we utilized a baseline assumption of no difference in cesarean delivery between the two arms as there was no statistically significant difference in the U.S. studies or in women prior to 41 0/7 weeks of gestation. In each of the models, women who were electively induced had better overall outcomes among both mothers and neonates as estimated by total quality-adjusted life years (QALYs) as well as by reduction in specific perinatal outcomes such as shoulder dystocia, meconium aspiration syndrome, and preeclampsia. Additionally, induction of labor was cost-effective at $10,789 per QALY with elective induction of labor at 41 weeks of gestation, $9,932 per QALY at 40 weeks of gestation, and $20,222 per QALY at 39 weeks of gestation utilizing a cost-effectiveness threshold of $50,000 per QALY. At 41 weeks of gestation, these results were generally robust to variations in the assumed ranges in univariate and multi-way sensitivity analyses. However, the findings of cost-effectiveness at 40 and 39 weeks of gestation were not robust to the ranges of the assumptions.
In addition, the strength of evidence for some model inputs was low; therefore our analyses are exploratory rather than definitive.

Randomized controlled trials suggest that elective induction of labor at 41 weeks of gestation and beyond may be associated with a decrease in both the risk of cesarean delivery and of meconium-stained amniotic fluid. The evidence regarding elective induction of labor prior to 41 weeks of gestation is insufficient to draw any conclusion. There is a paucity of information from prospective RCTs examining other maternal or neonatal outcomes in the setting of elective induction of labor. Observational studies found higher rates of cesarean delivery with elective induction of labor, but compared women undergoing induction of labor to women in spontaneous labor and were subject to potential confounding bias, particularly from gestational age. Such studies do not inform the question of how elective induction of labor affects maternal or neonatal outcomes. Elective induction of labor at 41 weeks of gestation and potentially earlier also appears to be a cost-effective intervention, but because of the need for further data to populate these models our analyses are not definitive. Despite the evidence from the prospective RCTs reported above, there are concerns about the translation of such findings into actual practice; thus, there is a great need for studying the translation of such research into settings where the majority of obstetric care is provided.
PubMed ID: 19408970
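The cost-effectiveness figures above ($10,789 per QALY at 41 weeks, judged against a $50,000-per-QALY threshold) are incremental cost-effectiveness ratios: the cost difference between strategies divided by the QALY difference. A minimal illustration with invented costs and QALYs, not the model's actual inputs:

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER) = dCost / dQALY.
# All numbers below are invented for illustration, not the decision model's.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Dollars per QALY gained by the new strategy over the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=8_500.0, qaly_new=56.92, cost_old=8_000.0, qaly_old=56.87)
print(f"ICER = ${ratio:,.0f} per QALY; under $50,000/QALY threshold: {ratio < 50_000}")
```

Because the denominator is a small QALY difference, ICERs are sensitive to model inputs, which is why the abstract's 40- and 39-week results were not robust in sensitivity analysis.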
Isolated Disease of the Proximal Left Anterior Descending Artery: Comparing the Effectiveness of Percutaneous Coronary Interventions and Coronary Artery Bypass Surgery
JACC: CARDIOVASCULAR INTERVENTIONS
2008; 1 (5): 483-491
This study sought to systematically compare the effectiveness of percutaneous coronary intervention and coronary artery bypass surgery in patients with single-vessel disease of the proximal left anterior descending (LAD) coronary artery.

It is uncertain whether percutaneous coronary interventions (PCI) or coronary artery bypass grafting (CABG) surgery provides better clinical outcomes among patients with single-vessel disease of the proximal LAD.

We searched relevant databases (MEDLINE, EMBASE, and Cochrane from 1966 to 2006) to identify randomized controlled trials that compared outcomes for patients with single-vessel proximal LAD assigned to either PCI or CABG.

We identified 9 randomized controlled trials that enrolled a total of 1,210 patients (633 received PCI and 577 received CABG). There were no differences in survival at 30 days, 1 year, or 5 years, nor were there differences in the rates of procedural strokes or myocardial infarctions, whereas the rate of repeat revascularization was significantly less after CABG than after PCI (at 1 year: 7.3% vs. 19.5%; at 5 years: 7.3% vs. 33.5%). Angina relief was significantly greater after CABG than after PCI (at 1 year: 95.5% vs. 84.6%; at 5 years: 84.2% vs. 75.6%). Patients undergoing CABG spent 3.2 more days in the hospital than those receiving PCI (95% confidence interval: 2.3 to 4.1 days, p < 0.0001), required more transfusions, and were more likely to have arrhythmias immediately post-procedure.

In patients with single-vessel, proximal LAD disease, survival was similar in CABG-assigned and PCI-assigned patients; CABG was significantly more effective in relieving angina and led to fewer repeat revascularizations.
DOI: 10.1016/j.jcin.2008.07.001
PubMed ID: 19463349
Preliminary assessment of pediatric health care quality and patient safety in the United States using readily available administrative data
PEDIATRICS
2008; 122 (2): E416-E425
With >6 million hospital stays, costing almost $50 billion annually, hospitalized children represent an important population for which most inpatient quality indicators are not applicable. Our aim was to develop indicators using inpatient administrative data to assess aspects of the quality of inpatient pediatric care and access to quality outpatient care.

We adapted the Agency for Healthcare Research and Quality quality indicators, a publicly available set of measurement tools refined previously by our team, for a pediatric population. We systematically reviewed the literature for evidence regarding coding and construct validity specific to children. We then convened 4 expert panels to review and discuss the evidence and asked them to rate each indicator through a 2-stage modified Delphi process. From the 2000 and 2003 Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project Kids' Inpatient Database, we generated national estimates for provider level indicators and for area level indicators.

Panelists recommended 18 indicators for inclusion in the pediatric quality indicator set based on overall usefulness for quality improvement efforts. The indicators included 13 hospital-level indicators, including 11 based on complications, 1 based on mortality, and 1 based on volume, as well as 5 area-level potentially preventable hospitalization indicators. National rates for all 18 of the indicators varied minimally between years. Rates in high-risk strata are notably higher than in the overall groups: in 2003 the decubitus ulcer pediatric quality indicator rate was 3.12 per 1000, whereas patients with limited mobility experienced a rate of 22.83.
Trends in rates by age varied across pediatric quality indicators: short-term complications of diabetes increased with age, whereas admissions for gastroenteritis decreased with age. Tracking potentially preventable complications and hospitalizations has the potential to help prioritize quality improvement efforts at both local and national levels, although additional validation research is needed to confirm the accuracy of coding.
View details for DOI 10.1542/peds.2007-2477
View details for Web of Science ID 000258142500062
View details for PubMedID 18676529
Modeling the logistics of response to anthrax bioterrorism
MEDICAL DECISION MAKING
2008; 28 (3): 332-350
A bioterrorism attack with an agent such as anthrax will require rapid deployment of medical and pharmaceutical supplies to exposed individuals. How should such a logistical system be organized? How much capacity should be built into each element of the bioterrorism response supply chain? The authors developed a compartmental model to evaluate the costs and benefits of various strategies for preattack stockpiling and postattack distribution and dispensing of medical and pharmaceutical supplies, as well as the benefits of rapid attack detection. The authors show how the model can be used to address a broad range of logistical questions as well as related, nonlogistical questions (e.g., the cost-effectiveness of strategies to improve patient adherence to antibiotic regimens). They generate several key insights about appropriate strategies for local communities. First, stockpiling large local inventories of medical and pharmaceutical supplies is unlikely to be the most effective means of reducing mortality from an attack, given the availability of national and regional supplies. Instead, communities should create sufficient capacity for dispensing prophylactic antibiotics in the event of a large-scale bioterror attack. Second, improved surveillance systems can significantly reduce deaths from such an attack but only if the local community has sufficient antibiotic-dispensing capacity. Third, mortality from such an attack is significantly affected by the number of unexposed individuals seeking prophylaxis and treatment. Fourth, full adherence to treatment regimens is critical for reducing expected mortality. Effective preparation can avert deaths in the event of an attack, and models such as this one can help communities prepare more effectively for responding to potential bioterror attacks.
View details for DOI 10.1177/0272989X07312721
View details for Web of Science ID 000256264500006
View details for PubMedID 18349432
Implementing Effective Hypertension Quality Improvement Strategies: Barriers and Potential Solutions
JOURNAL OF CLINICAL HYPERTENSION
2008; 10 (4): 311-316
Many quality improvement strategies have focused on improving blood pressure control, and these strategies can target the patient, the provider, and/or the system. Strategies that seem to have the biggest effect on blood pressure outcomes are team change, patient education, facilitated relay of clinical information, and promotion of self-management. Barriers to effective blood pressure control can affect the patient, the physician, the system, and/or "cues to action." We review the barriers to achieving blood pressure control and describe current and potential creative strategies for optimizing blood pressure control. These include home-based disease management, combined patient and provider education, and automatic decision support systems. Future research must address which components of quality improvement interventions are most successful in achieving blood pressure control.
View details for Web of Science ID 000261099600008
View details for PubMedID 18401229
Systematic review: The comparative effectiveness of percutaneous coronary interventions and coronary artery bypass graft surgery
ANNALS OF INTERNAL MEDICINE
2007; 147 (10): 703-U139
The comparative effectiveness of coronary artery bypass graft (CABG) surgery and percutaneous coronary intervention (PCI) for patients in whom both procedures are feasible remains poorly understood. To compare the effectiveness of PCI and CABG in patients for whom coronary revascularization is clinically indicated. MEDLINE, EMBASE, and Cochrane databases (1966-2006); conference proceedings; and bibliographies of retrieved articles. Randomized, controlled trials (RCTs) reported in any language that compared clinical outcomes of PCI with those of CABG, and selected observational studies. Information was extracted on study design, sample characteristics, interventions, and clinical outcomes. The authors identified 23 RCTs in which 5019 patients were randomly assigned to PCI and 4944 patients were randomly assigned to CABG. The difference in survival after PCI or CABG was less than 1% over 10 years of follow-up. Survival did not differ between PCI and CABG for patients with diabetes in the 6 trials that reported on this subgroup. Procedure-related strokes were more common after CABG than after PCI (1.2% vs. 0.6%; risk difference, 0.6%; P = 0.002). Angina relief was greater after CABG than after PCI, with risk differences ranging from 5% to 8% at 1 to 5 years (P < 0.001). The absolute rates of angina relief at 5 years were 79% after PCI and 84% after CABG. Repeated revascularization was more common after PCI than after CABG (risk difference, 24% at 1 year and 33% at 5 years; P < 0.001); the absolute rates at 5 years were 46.1% after balloon angioplasty, 40.1% after PCI with stents, and 9.8% after CABG. In the observational studies, the CABG-PCI hazard ratio for death favored PCI among patients with the least severe disease and CABG among those with the most severe disease. The RCTs were conducted in leading centers in selected patients.
The authors could not assess whether comparative outcomes vary according to clinical factors, such as extent of coronary disease, ejection fraction, or previous procedures. Only 1 small trial used drug-eluting stents. Compared with PCI, CABG was more effective in relieving angina and led to fewer repeated revascularizations but had a higher risk for procedural stroke. Survival to 10 years was similar for both procedures.
View details for PubMedID 17938385
Inhalational, gastrointestinal, and cutaneous anthrax in children
ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE
2007; 161 (9): 896-905
To systematically review all published case reports of children with anthrax to evaluate the predictors of disease progression and mortality. Data sources were 14 selected journal indexes (1900-1966), MEDLINE (1966-2005), and the bibliographies of all retrieved articles. Eligible reports were case reports (any language) of anthrax in persons younger than 18 years published between January 1, 1900, and December 31, 2005, with symptoms and culture, Gram stain, or autopsy evidence of anthrax infection. Main outcome measures were disease progression, treatment responses, and mortality. Of 2499 potentially relevant articles, 73 case reports of pediatric anthrax (5 inhalational cases, 22 gastrointestinal cases, 37 cutaneous cases, 6 cases of primary meningoencephalitis, and 3 atypical cases) met the inclusion criteria. Only 10% of the patients were younger than 2 years, and 24% were girls. Of the few children with inhalational anthrax, none had nonheadache neurologic symptoms, a key finding that distinguishes adult inhalational anthrax from more common illnesses, such as influenza. Overall, observed mortality was 60% (3 of 5) for inhalational anthrax, 65% (13 of 20) for gastrointestinal anthrax, 14% (5 of 37) for cutaneous anthrax, and 100% (6 of 6) for primary meningoencephalitis. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, and 9 of the 11 children (82%) who received anthrax antiserum survived. The clinical presentation of children with anthrax is varied. The mortality rate is high in children with inhalational anthrax, gastrointestinal anthrax, and anthrax meningoencephalitis. Rapid diagnosis and effective treatment of anthrax in children require recognition of the broad spectrum of clinical presentations of pediatric anthrax.
View details for PubMedID 17768291
- Why rescue the administrative data version of the "failure to rescue" quality indicator MEDICAL CARE 2007; 45 (4): 277-279
Comparative effectiveness of percutaneous coronary interventions and coronary artery bypass grafting for coronary artery disease
30th Annual Meeting of the Society-of-General-Internal-Medicine
SPRINGER. 2007: 47–47
View details for Web of Science ID 000251610700159
Quality improvement strategies for type 2 diabetes - Reply
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
2006; 296 (22): 2681-2681
View details for Web of Science ID 000242765700019
Pediatric anthrax: implications for bioterrorism preparedness.
Evidence report/technology assessment
To systematically review the literature about children with anthrax to describe their clinical course, treatment responses, and the predictors of disease progression and mortality. MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and bibliographies of all retrieved articles. We sought case reports of pediatric anthrax published between 1900 and 2005 meeting predefined criteria. We abstracted three types of data from the English-language reports: (1) patient information (e.g., age, gender, nationality); (2) symptom and disease progression information (e.g., whether the patient developed meningitis); and (3) treatment information (e.g., treatments received, year of treatment). We compared the clinical symptoms and disease progression variables for the pediatric cases with data on adult anthrax cases reviewed previously. We identified 246 titles of potentially relevant articles from our MEDLINE search and 2253 additional references from our manual search of the bibliographies of retrieved articles and the indexes of the 14 selected journals. We included 62 case reports of pediatric anthrax, comprising two inhalational cases, 20 gastrointestinal cases, 37 cutaneous cases, and three atypical cases. Anthrax is a relatively common and historically well-recognized disease, yet it is rarely reported among children, suggesting the possibility of significant under-diagnosis, underreporting, and/or publication bias. Children with anthrax present with a wide range of clinical signs and symptoms, which differ somewhat from the presenting features of adults with anthrax. Like adults, children with gastrointestinal anthrax have two distinct clinical presentations: upper tract disease characterized by dysphagia and oropharyngeal findings, and lower tract disease characterized by fever, abdominal pain, and nausea and vomiting. Additionally, children with inhalational disease may have "atypical" presentations including primary meningoencephalitis.
Children with inhalational anthrax have abnormal chest roentgenograms; however, children with other forms of anthrax usually have normal roentgenograms. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, whereas nine of 11 children (82%) who received anthrax antiserum survived. There is a broad spectrum of clinical signs and symptoms associated with pediatric anthrax. The limited data available regarding disease progression and treatment responses for children infected with anthrax suggest some differences from adult populations. Preparedness planning efforts should specifically address the needs of pediatric victims.
View details for PubMedID 17764208
Effects of quality improvement strategies for type 2 diabetes on glycemic control - A meta-regression analysis
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
2006; 296 (4): 427-440
There have been numerous reports of interventions designed to improve the care of patients with diabetes, but the effectiveness of such interventions is unclear. To assess the impact on glycemic control of 11 distinct strategies for quality improvement (QI) in adults with type 2 diabetes. MEDLINE (1966-April 2006) and the Cochrane Collaboration's Effective Practice and Organisation of Care Group database, which covers multiple bibliographic databases. Eligible studies included randomized or quasi-randomized controlled trials and controlled before-after studies that evaluated a QI intervention targeting some aspect of clinician behavior or organizational change and reported changes in glycosylated hemoglobin (HbA1c) values. Postintervention differences in HbA1c values were estimated using a meta-regression model that included baseline glycemic control and other key intervention and study features as predictors. Fifty randomized controlled trials, 3 quasi-randomized trials, and 13 controlled before-after trials met all inclusion criteria. Across these 66 trials, interventions reduced HbA1c values by a mean of 0.42% (95% confidence interval [CI], 0.29%-0.54%) over a median of 13 months of follow-up. Trials with fewer patients than the median for all included trials reported significantly greater effects than did larger trials (0.61% vs 0.27%, P = .004), strongly suggesting publication bias. Trials with mean baseline HbA1c values of 8.0% or greater also reported significantly larger effects (0.54% vs 0.20%, P = .005). Adjusting for these effects, 2 of the 11 categories of QI strategies were associated with reductions in HbA1c values of at least 0.50%: team changes (0.67%; 95% CI, 0.43%-0.91%; n = 26 trials) and case management (0.52%; 95% CI, 0.31%-0.73%; n = 26 trials); these also represented the only 2 strategies conferring significant incremental reductions in HbA1c values.
Interventions involving team changes reduced values by 0.33% more (95% CI, 0.12%-0.54%; P = .004) than those without this strategy, and those involving case management reduced values by 0.22% more (95% CI, 0.00%-0.44%; P = .04) than those without case management. Interventions in which nurse or pharmacist case managers could make medication adjustments without awaiting physician authorization reduced values by 0.80% (95% CI, 0.51%-1.10%), vs only 0.32% (95% CI, 0.14%-0.49%) for all other interventions (P = .002). Most QI strategies produced small to modest improvements in glycemic control. Team changes and case management showed more robust improvements, especially for interventions in which case managers could adjust medications without awaiting physician approval. Estimates of the effectiveness of other specific QI strategies may have been limited by difficulty in classifying complex interventions, insufficient numbers of studies, and publication bias.
View details for Web of Science ID 000239242500029
View details for PubMedID 16868301
Quality improvement strategies for hypertension management - A systematic review
2006; 44 (7): 646-657
Care remains suboptimal for many patients with hypertension. The purpose of this study was to assess the effectiveness of quality improvement (QI) strategies in lowering blood pressure. MEDLINE, Cochrane databases, and article bibliographies were searched. Trials, controlled before-after studies, and interrupted time series evaluating QI interventions targeting hypertension control and reporting blood pressure outcomes were included. Two reviewers abstracted data and classified QI strategies into categories: provider education, provider reminders, facilitated relay of clinical information, patient education, self-management, patient reminders, audit and feedback, team change, and financial incentives. Forty-four articles reporting 57 comparisons underwent quantitative analysis. Patients in the intervention groups experienced median reductions in systolic blood pressure (SBP) and diastolic blood pressure (DBP) that were 4.5 mm Hg (interquartile range [IQR]: 1.5 to 11.0) and 2.1 mm Hg (IQR: -0.2 to 5.0) greater than those observed for control patients. Median increases in the percentage of individuals achieving target goals for SBP and DBP were 16.2% (IQR: 10.3 to 32.2) and 6.0% (IQR: 1.5 to 17.5), respectively. Interventions that included team change as a QI strategy were associated with the largest reductions in blood pressure outcomes. All team change studies included assignment of some responsibilities to a health professional other than the patient's physician. Not all QI strategies have been assessed equally, which limits the power to compare differences in effects between strategies. QI strategies are associated with improved hypertension control. A focus on hypertension by someone in addition to the patient's physician was associated with substantial improvement. Future research should examine the contributions of individual QI strategies and their relative costs.
View details for Web of Science ID 000238806300006
View details for PubMedID 16799359
Systematic review: A century of inhalational anthrax cases from 1900 to 2005
ANNALS OF INTERNAL MEDICINE
2006; 144 (4): 270-280
Mortality from inhalational anthrax during the 2001 U.S. attack was substantially lower than that reported historically. To systematically review all published inhalational anthrax case reports to evaluate the predictors of disease progression and mortality. MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and bibliographies of all retrieved articles. Case reports (in any language) between 1900 and 2005 that met predefined criteria. Two authors (1 author for non-English-language reports) independently abstracted patient data. The authors found 106 reports of 82 cases of inhalational anthrax. Mortality was statistically significantly lower for patients receiving antibiotics or anthrax antiserum during the prodromal phase of disease, multidrug antibiotic regimens, or pleural fluid drainage. Patients in the 2001 U.S. attack were less likely to die than historical anthrax case-patients (45% vs. 92%; P < 0.001) and were more likely to receive antibiotics during the prodromal phase (64% vs. 13%; P < 0.001), multidrug regimens (91% vs. 50%; P = 0.027), or pleural fluid drainage (73% vs. 11%; P < 0.001). Patients who progressed to the fulminant phase had a mortality rate of 97% (regardless of the treatment they received), and all patients with anthrax meningoencephalitis died. This was a retrospective case review of previously published heterogeneous reports. Despite advances in supportive care, fulminant-phase inhalational anthrax is usually fatal. Initiation of antibiotic or anthrax antiserum therapy during the prodromal phase is associated with markedly improved survival, although other aspects of care, differences in clinical circumstances, or unreported factors may contribute to this observed reduction in mortality. Efforts to improve early diagnosis and timely initiation of appropriate antibiotics are critical to reducing mortality.
View details for PubMedID 16490913
Reducing mortality from anthrax bioterrorism: Strategies for stockpiling and dispensing medical and pharmaceutical supplies
BIOSECURITY AND BIOTERRORISM-BIODEFENSE STRATEGY PRACTICE AND SCIENCE
2006; 4 (3): 244-262
A critical question in planning a response to bioterrorism is how antibiotics and medical supplies should be stockpiled and dispensed. The objective of this work was to evaluate the costs and benefits of alternative strategies for maintaining and dispensing local and regional inventories of antibiotics and medical supplies for responses to anthrax bioterrorism. We modeled the regional and local supply chain for antibiotics and medical supplies as well as local dispensing capacity. We found that mortality was highly dependent on the local dispensing capacity, the number of individuals requiring prophylaxis, adherence to prophylactic antibiotics, and delays in attack detection. For an attack exposing 250,000 people and requiring the prophylaxis of 5 million people, expected mortality fell from 243,000 to 145,000 as the dispensing capacity increased from 14,000 to 420,000 individuals per day. At low dispensing capacities (<14,000 individuals per day), nearly all exposed individuals died, regardless of the rate of adherence to prophylaxis, delays in attack detection, or availability of local inventories. No benefit was achieved by doubling local inventories at low dispensing capacities; however, at higher dispensing capacities, the cost-effectiveness of doubling local inventories fell from US$100,000 to US$20,000 per life-year gained as the annual probability of an attack increased from 0.0002 to 0.001. We conclude that because of the reportedly rapid availability of regional inventories, the critical determinant of mortality following anthrax bioterrorism is local dispensing capacity. Bioterrorism preparedness efforts directed at improving local dispensing capacity are required before benefits can be reaped from enhancing local inventories.
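The capacity arithmetic behind this conclusion is easy to illustrate. A minimal back-of-envelope sketch (the function is ours; only the population and capacity figures come from the scenario above, and it deliberately omits the model's disease-progression and adherence dynamics):

```python
# Rough illustration of why local dispensing capacity dominates mortality:
# how long it takes to reach everyone who needs prophylactic antibiotics.

def days_to_dispense(people_needing_prophylaxis: int, daily_capacity: int) -> float:
    """Days required to dispense prophylaxis to the full target population."""
    return people_needing_prophylaxis / daily_capacity

# Scenario from the study: 5 million people requiring prophylaxis.
print(days_to_dispense(5_000_000, 14_000))    # ~357 days
print(days_to_dispense(5_000_000, 420_000))   # ~12 days
```

With inhalational anthrax progressing over days, a campaign measured in months protects almost no one, which is why mortality stayed near-total at low capacities regardless of how much inventory was stockpiled locally.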
View details for PubMedID 16999586
Overestimation of clinical diagnostic performance caused by low necropsy rates
QUALITY & SAFETY IN HEALTH CARE
2005; 14 (6): 408-413
Diagnostic sensitivity is calculated as the number of correct diagnoses divided by the sum of correct diagnoses plus the number of missed or false negative diagnoses. Because missed diagnoses are generally detected during clinical follow-up or at necropsy, the low necropsy rates seen in current practice may result in overestimates of diagnostic performance. Using three target conditions (aortic dissection, pulmonary embolism, and active tuberculosis), the prevalence of clinically missed cases among necropsied and non-necropsied deaths was estimated and the impact of low necropsy rates on the apparent sensitivity of antemortem diagnosis determined. After reviewing case series for each target condition, the most recent study that included cases first detected at necropsy was selected and the reported sensitivity of clinical diagnosis adjusted by estimating the total number of cases that would have been detected had all decedents undergone necropsy. These estimates were based on available data for necropsy rates, time period, country (US v non-US), and case mix. For all three target diagnoses, adjusting for the estimated prevalence of clinically missed cases among non-necropsied deaths produced sensitivity values outside the 95% confidence interval for the originally reported values, and well below sensitivities reported for the diagnostic tests that are usually used to detect these conditions.
For active tuberculosis the sensitivity of antemortem diagnosis decreased from an apparent value of 96% to a corrected value of 83%, with a plausible range of 42-91%; for aortic dissection the sensitivity decreased from 86% to 74%; and for pulmonary embolism the sensitivity fell only modestly, from 97% to 91%, but was still lower than generally reported values of 98% or more. Failure to adjust for the prevalence of missed cases among non-necropsied deaths may substantially overstate the performance of diagnostic tests and antemortem diagnosis in general, especially for conditions with high early case fatality.
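The correction can be sketched numerically. In this simplified illustration (the function names and case counts are hypothetical, chosen only to reproduce the tuberculosis figures of roughly 96% apparent versus 83% corrected; they are not the study's underlying data), apparent sensitivity counts only the missed cases actually surfaced at necropsy, so a low necropsy rate shrinks the denominator:

```python
def apparent_sensitivity(diagnosed: int, missed_found_at_necropsy: int) -> float:
    """Sensitivity as usually reported: only necropsy-detected misses count."""
    return diagnosed / (diagnosed + missed_found_at_necropsy)

def corrected_sensitivity(diagnosed: int, missed_found_at_necropsy: int,
                          necropsy_rate: float) -> float:
    """Scale the observed misses up, assuming missed cases occur at the same
    rate among non-necropsied decedents (a simplified version of the
    adjustment the study describes)."""
    estimated_total_missed = missed_found_at_necropsy / necropsy_rate
    return diagnosed / (diagnosed + estimated_total_missed)

# Hypothetical counts: 96 correct antemortem diagnoses, 4 misses found at
# necropsy, but only 20% of decedents necropsied.
print(apparent_sensitivity(96, 4))           # 0.96
print(corrected_sensitivity(96, 4, 0.20))    # ~0.83
```

At a 100% necropsy rate the two values coincide; the lower the necropsy rate, the more the apparent figure overstates true diagnostic performance.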
View details for DOI 10.1136/qshc.2004.011973
View details for Web of Science ID 000233686400005
View details for PubMedID 16326784
View details for PubMedCentralID PMC1744091
Challenges in systematic reviews: Synthesis of topics related to the delivery, organization, and financing of health care
ANNALS OF INTERNAL MEDICINE
2005; 142 (12): 1056-1065
Some important health policy topics, such as those related to the delivery, organization, and financing of health care, present substantial challenges to established methods for evidence synthesis. For example, such reviews may ask: What is the effect of for-profit versus not-for-profit delivery of care on patient outcomes? Or, which strategies are the most effective for promoting preventive care? This paper describes innovative methods for synthesizing evidence related to the delivery, organization, and financing of health care. We found 13 systematic reviews on these topics that described novel methodologic approaches. Several of these syntheses used 3 approaches: conceptual frameworks to inform problem formulation, systematic searches that included nontraditional literature sources, and hybrid synthesis methods that included simulations to address key gaps in the literature. As the primary literature on these topics expands, so will opportunities to develop additional novel methods for performing high-quality comprehensive syntheses.
View details for PubMedID 15968030
Impacts of informal caregiver availability on long-term care expenditures in OECD countries
HEALTH SERVICES RESEARCH
2004; 39 (6): 1971-1995
To quantify the effects of informal caregiver availability and public funding on formal long-term care (LTC) expenditures in developed countries. Secondary data were acquired for 15 Organization for Economic Cooperation and Development (OECD) countries from 1970 to 2000. Secondary data analysis, applying fixed- and random-effects models to time-series cross-sectional data. Outcome variables are inpatient or home health LTC expenditures. Key explanatory variables are measures of the availability of informal caregivers, generosity in public funding for formal LTC, and the proportion of the elderly population in the total population. Aggregated macro data were obtained from OECD Health Data, United Nations Demographic Yearbooks, and the U.S. Census Bureau International Data Base. Most of the 15 OECD countries experienced growth in LTC expenditures over the study period. The availability of a spouse caregiver, measured by male-to-female ratio among the elderly, is associated with a $28,840 (1995 U.S. dollars) annual reduction in formal LTC expenditure per additional elderly male. Availability of an adult child caregiver, measured by female labor force participation and full-time/part-time status shift, is associated with a reduction of $310 to $3,830 in LTC expenditures. These impacts on LTC expenditure vary across countries and across time within a country. The availability of an informal caregiver, particularly a spouse caregiver, is among the most important factors explaining variation in LTC expenditure growth. Long-term care policies should take into account behavioral responses: decreased public funding in LTC may lead working women to leave the labor force to provide more informal care.
View details for Web of Science ID 000226743500004
View details for PubMedID 15544640
View details for PubMedCentralID PMC1361108
Systematic review: Surveillance systems for early detection of bioterrorism-related diseases
ANNALS OF INTERNAL MEDICINE
2004; 140 (11): 910-922
Given the threat of bioterrorism and the increasing availability of electronic data for surveillance, surveillance systems for the early detection of illnesses and syndromes potentially related to bioterrorism have proliferated. To critically evaluate the potential utility of existing surveillance systems for illnesses and syndromes related to bioterrorism. Databases of peer-reviewed articles (for example, MEDLINE for articles published from January 1985 to April 2002) and Web sites of relevant government and nongovernment agencies. Reports that described or evaluated systems for collecting, analyzing, or presenting surveillance data for bioterrorism-related illnesses or syndromes. From each included article, the authors abstracted information about the type of surveillance data collected; the method of collection, analysis, and presentation of surveillance data; and outcomes of evaluations of the system. A total of 17,510 article citations and 8088 government and nongovernmental Web sites were reviewed. From these, the authors included 115 systems that collect various surveillance reports, including 9 syndromic surveillance systems, 20 systems collecting bioterrorism detector data, 13 systems collecting influenza-related data, and 23 systems collecting laboratory and antimicrobial resistance data. Only the systems collecting syndromic surveillance data and detection system data were designed, at least in part, for bioterrorism preparedness applications. Syndromic surveillance systems have been deployed for both event-based and continuous bioterrorism surveillance. Few surveillance systems have been comprehensively evaluated. Only 3 systems have had both sensitivity and specificity evaluated. Data from some existing surveillance systems (particularly those developed by the military) may not be publicly available. Few surveillance systems have been specifically designed for collecting and analyzing data for the early detection of a bioterrorist event.
Because current evaluations of surveillance systems for detecting bioterrorism and emerging infections are insufficient to characterize their timeliness, sensitivity, or specificity, clinical and public health decision making based on these systems may be compromised.
View details for PubMedID 15172906
Regionalization of bioterrorism preparedness and response.
Evidence report/technology assessment (Summary)
View details for PubMedID 15133889
A conceptual framework for evaluating information technologies and decision support systems for bioterrorism preparedness and response
24th Annual Meeting of the Society-for-Medical-Decision-Making
SAGE PUBLICATIONS INC. 2004: 192–206
The authors sought to develop a conceptual framework for evaluating whether existing information technologies and decision support systems (IT/DSSs) would assist the key decisions faced by clinicians and public health officials preparing for and responding to bioterrorism. They reviewed reports of natural and bioterrorism-related infectious outbreaks, bioterrorism preparedness exercises, and advice from experts to identify the key decisions, tasks, and information needs of clinicians and public health officials during a bioterrorism response. The authors used task decomposition to identify the subtasks and data requirements of IT/DSSs designed to facilitate a bioterrorism response. They used the results of the task decomposition to develop evaluation criteria for IT/DSSs for bioterrorism preparedness. They then applied these evaluation criteria to 341 reports of 217 existing IT/DSSs that could be used to support a bioterrorism response. In response to bioterrorism, clinicians must make decisions in 4 critical domains (diagnosis, management, prevention, and reporting to public health), and public health officials must make decisions in 4 other domains (interpretation of bioterrorism surveillance data, outbreak investigation, outbreak control, and communication). The time horizons and utility functions for these decisions differ. From the task decomposition, the authors identified critical subtasks for each of the 8 decisions. For example, interpretation of diagnostic tests is an important subtask of diagnostic decision making that requires an understanding of the tests' sensitivity and specificity. Therefore, an evaluation criterion applied to reports of diagnostic IT/DSSs for bioterrorism asked whether the reports described the systems' sensitivity and specificity.
Of the 217 existing IT/DSSs that could be used to respond to bioterrorism, 79 studies evaluated 58 systems for at least 1 performance metric. The authors identified 8 key decisions that clinicians and public health officials must make in response to bioterrorism. When applying the evaluation system to 217 currently available IT/DSSs that could potentially support the decisions of clinicians and public health officials, the authors found that the literature provides little information about the accuracy of these systems.
View details for DOI 10.1177/0272989X04263254
View details for PubMedID 15090105
Evaluating detection and diagnostic decision support systems for bioterrorism response
EMERGING INFECTIOUS DISEASES
2004; 10 (1): 100-108
We evaluated the usefulness of detection systems and diagnostic decision support systems for bioterrorism response. We performed a systematic review by searching relevant databases (e.g., MEDLINE) and Web sites for reports of detection systems and diagnostic decision support systems that could be used during bioterrorism responses. We reviewed over 24,000 citations and identified 55 detection systems and 23 diagnostic decision support systems. Only 35 systems have been evaluated: 4 reported both sensitivity and specificity, 13 were compared to a reference standard, and 31 were evaluated for their timeliness. Most evaluations of detection systems and some evaluations of diagnostic systems for bioterrorism responses are critically deficient. Because false-positive and false-negative rates are unknown for most systems, decision making on the basis of these systems is seriously compromised. We describe a framework for the design of future evaluations of such systems.
View details for Web of Science ID 000187962800016
View details for PubMedID 15078604
View details for PubMedCentralID PMC3322751
Changes in rates of autopsy-detected diagnostic errors over time - A systematic review
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
2003; 289 (21): 2849-2856
Substantial discrepancies exist between clinical diagnoses and findings at autopsy. Autopsy may be used as a tool for quality management to analyze diagnostic discrepancies. To determine the rate at which autopsies detect important, clinically missed diagnoses, and the extent to which this rate has changed over time, a systematic literature search for English-language articles available on MEDLINE from 1966 to April 2002, using the search terms autopsy, postmortem changes, post-mortem, postmortem, necropsy, and posthumous, identified 45 studies reporting 53 distinct autopsy series meeting prospectively defined criteria. Reference lists were reviewed to identify additional studies, and the final bibliography was distributed to experts in the field to identify missing or unpublished studies. Included studies reported clinically missed diagnoses involving a primary cause of death (major errors), with the most serious being those likely to have affected patient outcome (class I errors). Logistic regression was performed using data from 53 distinct autopsy series over a 40-year period, adjusting for the effects of changes in autopsy rates, country, case mix (general autopsies; adult medical; adult intensive care; adult or pediatric surgery; general pediatrics or pediatric inpatients; neonatal or pediatric intensive care; and other autopsy), and important methodological features of the primary studies. Of the 53 autopsy series identified, 42 reported major errors and 37 reported class I errors. Twenty-six autopsy series reported both major and class I error rates. The median error rate was 23.5% (range, 4.1%-49.8%) for major errors and 9.0% (range, 0%-20.7%) for class I errors. Analyses of diagnostic error rates adjusting for the effects of case mix, country, and autopsy rate yielded relative decreases per decade of 19.4% (95% confidence interval [CI], 1.8%-33.8%) for major errors and 33.4% (95% CI, 8.4%-51.6%) for class I errors.
Despite these decreases, we estimated that a contemporary US institution (with autopsy rates ranging from 100% [the extrapolated extreme at which clinical selection is eliminated] to 5% [roughly the national average]) could observe a major error rate from 8.4% to 24.4% and a class I error rate from 4.1% to 6.7%. The possibility that a given autopsy will reveal important unsuspected diagnoses has decreased over time, but remains sufficiently high that encouraging ongoing use of the autopsy appears warranted.
View details for Web of Science ID 000183205500034
View details for PubMedID 12783916
Refinement and validation of the AHRQ patient safety indicators (PSI).
26th Annual Meeting of the Society-of-General-Internal-Medicine
SPRINGER. 2003: 294–295
View details for Web of Science ID 000182564301227
A national profile of patient safety in US hospitals
2003; 22 (2): 154-166
Measures based on routinely collected data would be useful to examine the epidemiology of patient safety. Extending previous work, we established the face and consensual validity of twenty Patient Safety Indicators (PSIs). We generated a national profile of patient safety by applying these PSIs to the HCUP Nationwide Inpatient Sample. The incidence of most nonobstetric PSIs increased with age and was higher among African Americans than among whites. The adjusted incidence of most PSIs was highest at urban teaching hospitals. The PSIs may be used in AHRQ's National Quality Report, while providers may use them to screen for preventable complications, target opportunities for improvement, and benchmark performance.
View details for Web of Science ID 000181450400025
View details for PubMedID 12674418
The autopsy as an outcome and performance measure.
Evidence report/technology assessment (Summary)
View details for PubMedID 12467146
Utilization and outcomes of the implantable cardioverter defibrillator, 1987 to 1995
AMERICAN HEART JOURNAL
2002; 144 (3): 397-403
The patterns of adoption of the implantable cardioverter defibrillator (ICD) and the outcomes of its use have not been well documented in general, unselected populations. The purpose of this study was to document the impact of the ICD in widespread clinical practice. We identified ICD recipients by use of the hospital discharge databases of Medicare beneficiaries for 1987 through 1995 and of California residents for 1991 through 1995. The index admission for each patient was linked to previous and subsequent admissions and to mortality files to create a longitudinal patient profile. The rate of ICD implantations increased >10-fold between 1987 and 1995, as both the number of hospitals performing the procedure and the volume of ICD implantations per hospital rose. Mortality rates within 30 days of ICD implantation decreased from 6.0% to 1.9%, and mortality rates within 1 year fell from 19.3% to 11.4%. Surgical interventions to revise or replace the ICD within the first year remained about 5%, however, and cumulative expenditures at 1 year ($46,000-$51,000) changed very little. ICD implantation rates varied >3-fold among different regions of the United States. ICD use expanded markedly during the study period, with improved mortality rates, but medical expenditures and rates of surgical revision remain high for ICD recipients.
View details for DOI 10.1067/mhj.2002.125496
View details for Web of Science ID 000178086800006
View details for PubMedID 12228775
Effect of risk stratification on cost-effectiveness of the implantable cardioverter defibrillator
AMERICAN HEART JOURNAL
2002; 144 (3): 440-448
Implantable cardioverter defibrillators (ICDs) effectively prevent sudden cardiac death, but selection of appropriate patients for implantation is complex. We evaluated whether risk stratification based on risk of sudden cardiac death alone was sufficient to predict the effectiveness and cost-effectiveness of the ICD. We developed a Markov model to evaluate the cost-effectiveness of ICD implantation compared with empiric amiodarone treatment. The model incorporated mortality rates from sudden cardiac death, nonsudden cardiac death, and noncardiac death, and costs for each treatment strategy. We based our model inputs on data from randomized clinical trials, registries, and meta-analyses. We assumed that the ICD reduced total mortality rates by 25% relative to amiodarone. The relationship between the cost-effectiveness of the ICD and the total annual cardiac mortality rate is U-shaped; cost-effectiveness becomes unfavorable at both low and high total cardiac mortality rates. If the annual total cardiac mortality rate is 12%, the cost-effectiveness of the ICD varies from $36,000 per quality-adjusted life-year (QALY) gained when the ratio of sudden to nonsudden cardiac death is 4, to $116,000 per QALY gained when the ratio is 0.25. The cost-effectiveness of ICD use relative to amiodarone depends on total cardiac mortality rates as well as the ratio of sudden to nonsudden cardiac death. Studies of candidate diagnostic tests for risk stratification should distinguish patients who die suddenly from those who die nonsuddenly, not just patients who die suddenly from those who live.
View details for DOI 10.1067/mhj.2002.125501
View details for Web of Science ID 000178086800011
View details for PubMedID 12228780
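The dollars-per-QALY figures in this abstract are incremental cost-effectiveness ratios: the extra cost of one strategy over another divided by the extra QALYs it yields. A minimal sketch with hypothetical inputs, not the paper's model outputs:

```python
def icer(cost_new: float, cost_ref: float, qaly_new: float, qaly_ref: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical: the ICD strategy costs $60,000 more than amiodarone
# and yields 0.75 additional QALYs.
print(icer(cost_new=100_000, cost_ref=40_000, qaly_new=4.00, qaly_ref=3.25))  # 80000.0
```

The U-shape reported above arises because the QALY gain in the denominator shrinks both when few patients die suddenly (little for the device to prevent) and when nonsudden death dominates (averted arrhythmic deaths are quickly offset).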
Risk of sudden versus nonsudden cardiac death in patients with coronary artery disease
AMERICAN HEART JOURNAL
2002; 144 (3): 390-396
Patients at high risk of sudden cardiac death, yet at low risk of nonsudden death, might be ideal candidates for antiarrhythmic drugs or devices. Most previous studies of prognostic markers for sudden cardiac death have ignored the competing risk of nonsudden cardiac death. The goal of the present study was to evaluate the ability of clinical factors to distinguish the risks of sudden and nonsudden cardiac death. We identified all deaths during a 3.3-year follow-up of 30,680 patients discharged alive after admission to the cardiac care unit of a Seattle hospital. Detailed chart reviews were conducted on 1093 subsequent out-of-hospital sudden deaths, 973 nonsudden cardiac deaths, and 442 randomly selected control patients. Patients who died in follow-up (suddenly or nonsuddenly) differed significantly from control patients on many clinical factors. In contrast, patients with sudden cardiac death differed little from patients with nonsudden cardiac death on most clinical characteristics. The mode of death was 20% to 30% less likely to be sudden in women, patients who had angioplasty or bypass surgery, and patients prescribed beta-blockers. The mode of death was 20% to 30% more likely to be sudden in patients with heart failure, frequent ventricular ectopy, or a discharge diagnosis of acute myocardial infarction. A multivariable model had only modest predictive capacity for mode of death (c-index of 0.62). Standard clinical evaluation is much better at predicting overall risk of death than at predicting whether the mode of death will be sudden or nonsudden.
View details for DOI 10.1067/mhj.2002.125495
View details for Web of Science ID 000178086800005
View details for PubMedID 12228774
Management of ventricular arrhythmias in diverse populations in California
AMERICAN HEART JOURNAL
2002; 144 (3): 431-439
The use of coronary angiography and revascularization is lower than expected among black patients. It is uncertain whether use of other cardiac procedures also varies according to race and ethnicity and whether outcomes are affected. We analyzed discharge abstracts from all nonfederal hospitals in California of patients hospitalized for a primary diagnosis of ventricular tachycardia or ventricular fibrillation between 1992 and 1994. We compared mortality rates and use of electrophysiologic study (EPS) and implantable cardioverter-defibrillator (ICD) procedures according to the race and ethnicity of the patient. Among 8713 patients admitted with ventricular tachycardia or ventricular fibrillation, 29% (n = 2508) had a subsequent EPS procedure, and 9% (n = 818) had an ICD implanted. After controlling for potential confounding factors, we found that black patients were significantly less likely than white patients to undergo EPS (odds ratio 0.72, CI 0.56-0.92) or ICD implantation (odds ratio 0.39, CI 0.25-0.60). Blacks discharged alive from the initial hospital admission had higher mortality rates over the next year than white patients, even after controlling for multiple confounding risk factors (risk ratio 1.18, CI 1.03-1.36). The use of EPS and ICD procedures was also significantly affected by several other factors, most notably by on-site procedure availability but also by age, sex, and insurance status. In a large population of patients hospitalized for ventricular arrhythmia, blacks had significantly lower rates of utilization for EPS and ICD procedures and higher subsequent mortality rates.
View details for DOI 10.1067/mhj.2002.125500
View details for Web of Science ID 000178086800010
View details for PubMedID 12228779
Overview of randomized trials of antiarrhythmic drugs and devices for the prevention of sudden cardiac death
AMERICAN HEART JOURNAL
2002; 144 (3): 422-430
Sudden cardiac death is a prominent feature of the natural history of heart disease. The efficacy of antiarrhythmic drugs and devices in preventing sudden death and reducing total mortality is uncertain. We reviewed randomized trials and quantitative overviews of type I and type III antiarrhythmic drugs. We also reviewed the randomized trials of implantable cardioverter defibrillators and combined these outcomes in a quantitative overview. Randomized trials of type I antiarrhythmic agents used as secondary prevention after myocardial infarction show an overall 21% increase in mortality rate. Randomized trials of amiodarone suggest a 13% to 19% decrease in mortality rate, and sotalol has been effective in several small trials. Trials of pure type III agents, however, have shown no mortality benefit. An overview of implantable defibrillator trials shows a 24% reduction in mortality rate (CI 15%-33%) compared with alternative therapy, most often amiodarone. In sum, amiodarone is effective in reducing the total mortality rate by 13% to 19%, and the implantable defibrillator reduces the mortality rate by a further 24%.
View details for DOI 10.1067/mhj.2002.125499
View details for Web of Science ID 000178086800009
View details for PubMedID 12228778
Trends in hospital treatment of ventricular arrhythmias among Medicare beneficiaries, 1985 to 1995
AMERICAN HEART JOURNAL
2002; 144 (3): 413-421
Treatment options for patients with ventricular arrhythmias have undergone major changes in the last 2 decades. Trends in use of invasive procedures, clinical outcomes, and expenditures have not been well documented. We used administrative databases of Medicare beneficiaries from 1985 to 1995 to identify patients hospitalized with ventricular arrhythmias. We created a longitudinal patient profile by linking the index admission with all earlier and subsequent admissions and with death records. Approximately 85,000 patients aged 65 years or older went to hospitals in the United States with ventricular arrhythmias each year, and about 20,000 survived to admission. From 1987 to 1995, the use of electrophysiology studies and implantable cardioverter defibrillators in patients who were hospitalized grew substantially, from 3% to 22% and from 1% to 13%, respectively. Hospital expenditures rose 8% per year, primarily because of the increased use of invasive procedures. Survival improved, particularly in the medium term, with 1-year survival rates increasing between 1987 and 1994 from 52.9% to 58.3%, or half a percentage point each year. Survival of patients who sustain a ventricular arrhythmia is poor, but improving. For patients who are admitted, more intensive treatment has been accompanied by increased hospital expenditures.
View details for DOI 10.1067/mhj.2002.125498
View details for Web of Science ID 000178086800008
View details for PubMedID 12228777
Life after a ventricular arrhythmia
AMERICAN HEART JOURNAL
2002; 144 (3): 404-412
There are few data from community-based evaluations of outcomes after a life-threatening ventricular arrhythmia (LTVA). We evaluated patients' quality of life (QOL) and medical costs after hospitalization and treatment for their first episode of an LTVA. We prospectively evaluated QOL by use of the Duke Activity Status Index (DASI), the Medical Outcomes Study SF-36 mental health and vitality scales, and the Cardiac Arrhythmia Suppression Trial (CAST) symptom scale, along with resource use, in patients discharged after a first episode of an LTVA in a managed care population of 2.4 million members. We enrolled 264 subjects with new cases of LTVA. Although functional status initially decreased compared with self-reports of pre-event functional status, both functional status and symptom levels improved significantly during the study period. These improvements were greater in patients receiving an implantable cardioverter defibrillator (ICD) than in patients receiving amiodarone. Ratings of mental health and vitality were not significantly different between the treatment groups and did not change significantly during follow-up. The total 2-year medical costs were higher for patients receiving an ICD than for patients receiving amiodarone, despite lower costs during the follow-up period for the patients receiving an ICD. New onset of an LTVA has a substantial negative initial impact on QOL. With therapy, most patients have improvements in their QOL and symptom level, possibly more so after treatment with an ICD. The costs of treating these patients are very high.
View details for DOI 10.1067/mhj.2002.125497
View details for Web of Science ID 000178086800007
View details for PubMedID 12228776
Safe but sound - Patient safety meets evidence-based medicine
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
2002; 288 (4): 508-513
Bioterrorism preparedness and response: use of information technologies and decision support systems.
Evidence report/technology assessment (Summary)
View details for PubMedID 12154489
Effectiveness and cost-effectiveness of implantable cardioverter defibrillators in the treatment of ventricular arrhythmias among Medicare beneficiaries
AMERICAN JOURNAL OF MEDICINE
2002; 112 (7): 519-527
The implantable cardioverter defibrillator has been assessed in randomized trials, but the generalizability of trial results to broader clinical settings is unclear. Our purpose was to evaluate the outcomes and costs of defibrillator use in an unselected population. We identified 125,892 Medicare patients who were discharged between 1987 and 1995 after hospitalization with a primary diagnosis of ventricular tachycardia or ventricular fibrillation, 7789 of whom (6.2%) received a defibrillator. We used a multivariable propensity score that included patient and hospital characteristics to match pairs of patients, in which one patient received a defibrillator and the other did not. We compared mortality and costs in these 7612 matched pairs during 8 years of follow-up. Patients who received a defibrillator were more likely to be younger, white, male, and urban dwelling, and to have ischemic heart disease, heart failure, or a history of ventricular fibrillation. In the matched-pairs analysis, those who received a defibrillator had significantly lower mortality: 11% versus 19% at 1 year (odds ratio [OR] = 0.57; 95% confidence interval [CI]: 0.51 to 0.63), 20% versus 30% at 2 years (OR = 0.66; 95% CI: 0.60 to 0.72), and 28% versus 39% at 3 years (OR = 0.70; 95% CI: 0.63 to 0.77). These patients also had lower mortality at 8 years (P = 0.0001), although this advantage over patients who received medical treatment only decreased over time. Expenditures among defibrillator recipients were consistently higher, with a cost-effectiveness ratio of $78,400 per life-year gained. The use of implantable defibrillators was associated with significantly lower mortality and higher costs, and its cost-effectiveness ratio was higher than that of many, but not all, generally accepted therapies.
View details for Web of Science ID 000175594300001
View details for PubMedID 12015242
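The matched-pairs design above pairs each defibrillator recipient with a clinically similar non-recipient via a propensity score. A toy greedy nearest-neighbor matcher with hypothetical scores (a real analysis would first fit the multivariable propensity model, and hospital-scale matching uses more sophisticated schemes):

```python
def greedy_match(treated: dict, controls: dict, caliper: float = 0.05):
    """Match each treated patient to the nearest unused control by
    propensity score, within a caliper; returns (treated_id, control_id) pairs."""
    pairs, used = [], set()
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        best, best_d = None, caliper
        for c_id, c_ps in controls.items():
            d = abs(t_ps - c_ps)
            if c_id not in used and d <= best_d:
                best, best_d = c_id, d
        if best is not None:
            used.add(best)
            pairs.append((t_id, best))
    return pairs

# Hypothetical propensity scores (probability of receiving a defibrillator):
treated = {"T1": 0.30, "T2": 0.70}
controls = {"C1": 0.32, "C2": 0.69, "C3": 0.10}
print(greedy_match(treated, controls))  # [('T1', 'C1'), ('T2', 'C2')]
```

Greedy caliper matching is only one option; optimal matching can achieve closer overall balance at higher computational cost.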
Evidence-based practice for mere mortals - The role of informatics and health services research
Conference on Methodological Challenges in the Production and Synthesis of Knowledge
SPRINGER. 2002: 302–8
The poor translation of evidence into practice is a well-known problem. Hopes are high that information technology can help make evidence-based practice feasible for mere mortal physicians. In this paper, we draw upon the methods and perspectives of clinical practice, medical informatics, and health services research to analyze the gap between evidence and action, and to argue that computing systems for bridging this gap should incorporate both informatics and health services research expertise. We discuss 2 illustrative systems--trial banks and a web-based system to develop and disseminate evidence-based guidelines (alchemist)--and conclude with a research and training agenda.
View details for Web of Science ID 000175116800008
View details for PubMedID 11972727
View details for PubMedCentralID PMC1495037
Surveillance systems for bioterrorism: A systematic review.
SPRINGER. 2002: 184–185
View details for Web of Science ID 000175158200733
HIM's role in monitoring patient safety.
Journal of AHIMA
2002; 73 (3): 72-74
View details for PubMedID 11905078
Potential cost-effectiveness of prophylactic use of the implantable cardioverter defibrillator or amiodarone after myocardial infarction
ANNALS OF INTERNAL MEDICINE
2001; 135 (10): 870-883
Clinical trials have shown that implantable cardioverter defibrillators (ICDs) improve survival in patients with sustained ventricular arrhythmias. To determine the efficacy necessary to make prophylactic ICD or amiodarone therapy cost-effective in patients with myocardial infarction, we performed a Markov model-based cost-utility analysis from the societal perspective over a lifetime horizon. Survival, cardiac death, and inpatient costs were estimated on the basis of the Myocardial Infarction Triage and Intervention registry; other data were derived from the literature. The target population was patients with past myocardial infarction who did not have sustained ventricular arrhythmia, and the strategies compared were ICD or amiodarone versus no treatment. Outcomes were life-years, quality-adjusted life-years (QALYs), costs, number needed to treat, and incremental cost-effectiveness. Compared with no treatment, ICD use led to the greatest QALYs and the highest expenditures; amiodarone use resulted in intermediate QALYs and costs. To reach acceptable cost-effectiveness thresholds (≤$75,000/QALY), ICDs had to reduce arrhythmic death by 50% and amiodarone had to reduce total death by 7% in patients with depressed ejection fraction. For moderate efficacies, in patients with ejection fractions of 0.3 or less, 0.31 to 0.4, and greater than 0.4, the cost-effectiveness of amiodarone compared with no therapy was $43,100/QALY, $66,500/QALY, and $132,500/QALY, respectively, and that of the ICD compared with amiodarone was $71,800/QALY, $195,700/QALY, and $557,900/QALY, respectively. Use of an ICD or amiodarone in patients with past myocardial infarction and severely depressed left ventricular function may provide substantial clinical benefit at an acceptable cost. These results highlight the importance of clinical trials of ICDs in patients with low ejection fractions who have had myocardial infarction.
View details for PubMedID 11712877
The prognostic value of troponin in patients with non-ST elevation acute coronary syndromes: A meta-analysis
JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
2001; 38 (2): 478-485
This study was designed to compare the prognostic value of an abnormal troponin level derived from studies of patients with non-ST elevation acute coronary syndromes (ACS). Risk stratification for patients with suspected ACS is important for determining need for hospitalization and intensity of treatment. We identified clinical trials and cohort studies of consecutive patients with suspected ACS without ST-elevation from 1966 through 1999. We excluded studies limited to patients with acute myocardial infarction and studies not reporting mortality or troponin results. Seven clinical trials and 19 cohort studies reported data for 5,360 patients with a troponin T test and 6,603 with a troponin I test. Patients with positive troponin (I or T) had significantly higher mortality than those with a negative test (5.2% vs. 1.6%, odds ratio [OR] 3.1). Cohort studies demonstrated a greater difference in mortality between patients with a positive versus negative troponin I (8.4% vs. 0.7%, OR 8.5) than clinical trials (4.8% if positive, 2.1% if negative, OR 2.6, p = 0.01). The prognostic value of a positive troponin T was also slightly greater for cohort studies (11.6% mortality if positive, 1.7% if negative, OR 5.1) than for clinical trials (3.8% if positive, 1.3% if negative, OR 3.0, p = 0.2). In patients with non-ST elevation ACS, the short-term odds of death are increased three- to eightfold for patients with an abnormal troponin test. Data from clinical trials suggest a lower prognostic value for troponin than do data from cohort studies.
View details for Web of Science ID 000170205800026
View details for PubMedID 11499741
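The odds ratios pooled in this meta-analysis come from 2x2 tables of mortality by troponin status. A small sketch with hypothetical counts (not the study-level data), including the common Woolf log-scale 95% confidence interval:

```python
from math import exp, log, sqrt

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table: a=died/troponin+, b=survived/troponin+,
    c=died/troponin-, d=survived/troponin-."""
    return (a * d) / (b * c)

def or_ci95(a: int, b: int, c: int, d: int):
    """Approximate 95% CI via the standard error of log(OR) (Woolf method)."""
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    center = log(odds_ratio(a, b, c, d))
    return exp(center - 1.96 * se), exp(center + 1.96 * se)

# Hypothetical cohort: 5.2% mortality among 1,000 troponin-positive patients,
# 1.6% among 1,000 troponin-negative patients.
or_point = odds_ratio(a=52, b=948, c=16, d=984)  # ~3.37
ci = or_ci95(52, 948, 16, 984)
```

A pooled estimate additionally weights each study's log-OR by the inverse of its variance, which is why the combined OR need not match any single crude table.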
Technological change around the world: Evidence from heart attack care
2001; 20 (3): 25-42
Although technological change is a hallmark of health care worldwide, relatively little evidence exists on whether changes in health care differ across the very different health care systems of developed countries. We present new comparative evidence on heart attack care in seventeen countries showing that technological change--changes in medical treatments that affect the quality and cost of care--is universal but has differed greatly around the world. Differences in treatment rates are greatest for costly medical technologies, where strict financing limits and other policies to restrict adoption of intensive technologies have been associated with divergences in medical practices over time. Countries appear to differ systematically in the time at which intensive cardiac procedures began to be widely used and in the rate of growth of the procedures. The differences appear to be related to economic and regulatory incentives of the health care systems and may have important economic and health consequences.
View details for Web of Science ID 000168576800005
View details for PubMedID 11585174
Development and validation of the Ontario acute myocardial infarction mortality prediction rules
JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
2001; 37 (4): 992-997
This study sought to develop and validate simple statistical models that can be used with hospital discharge administrative databases to predict 30-day and one-year mortality after an acute myocardial infarction (AMI). There is increasing interest in developing AMI "report cards" using population-based hospital discharge databases, but a lack of simple statistical models that can be used to adjust for regional and interinstitutional differences in patient case-mix. We used linked administrative databases on 52,616 patients having an AMI in Ontario, Canada, between 1994 and 1997 to develop logistic regression models to predict 30-day and one-year mortality after an AMI. These models were subsequently validated in two external cohorts of AMI patients derived from administrative datasets from Manitoba, Canada, and California, U.S. The 11-variable Ontario AMI mortality prediction rules accurately predicted mortality, with an area under the receiver operating characteristic (ROC) curve of 0.78 for 30-day mortality and 0.79 for one-year mortality in the Ontario dataset from which they were derived. In an independent validation dataset of 4,836 AMI patients from Manitoba, the ROC areas were 0.77 and 0.78, respectively. In a second validation dataset of 112,234 AMI patients from California, the ROC areas were 0.77 and 0.78, respectively. The Ontario AMI mortality prediction rules accurately predict 30-day and one-year mortality in linked hospital discharge databases of AMI patients from Ontario, Manitoba, and California. These models may also be useful to outcomes and quality measurement researchers in other jurisdictions.
View details for Web of Science ID 000167515700003
View details for PubMedID 11263626
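The ROC area used to validate these rules equals the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor, with ties counting half. A minimal rank-based sketch using hypothetical predicted risks:

```python
def c_statistic(risks_died, risks_survived):
    """Area under the ROC curve computed pairwise: the chance a decedent
    outranks a survivor, counting tied risks as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in risks_died
        for n in risks_survived
    )
    return wins / (len(risks_died) * len(risks_survived))

# Hypothetical predicted 30-day mortality risks:
print(c_statistic([0.8, 0.5], [0.5, 0.3]))  # 0.875
```

A value of 0.5 indicates no discrimination; the 0.77-0.79 areas reported above indicate good but imperfect discrimination.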
Cost reduction and implantable cardioverter defibrillator implantation
JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY
2001; 12 (2): 167-168
Cost-effectiveness of radiofrequency ablation for supraventricular tachycardia
ANNALS OF INTERNAL MEDICINE
2000; 133 (11): 864-876
Radiofrequency ablation is an established but expensive treatment option for many forms of supraventricular tachycardia. Most cases of supraventricular tachycardia are not life-threatening; the goal of therapy is therefore to improve the patient's quality of life. To compare the cost-effectiveness of radiofrequency ablation with that of medical management of supraventricular tachycardia, we built a Markov model taking the societal perspective over each patient's lifetime. Costs were estimated from a major academic hospital and the literature, and treatment efficacy was estimated from reports of clinical studies at major medical centers. Probabilities of clinical outcomes were estimated from the literature. To account for the effect of radiofrequency ablation on quality of life, assessments by patients who had undergone the procedure were used. The model followed a cohort of symptomatic patients who experienced 4.6 unscheduled visits per year to an emergency department or a physician's office while receiving long-term drug therapy for supraventricular tachycardia. The strategies compared were initial radiofrequency ablation, long-term antiarrhythmic drug therapy, and treatment of acute episodes of arrhythmia with antiarrhythmic drugs; outcomes were costs, quality-adjusted life-years, life-years, and marginal cost-effectiveness ratios. Among patients who have monthly episodes of supraventricular tachycardia, radiofrequency ablation was the most effective and least expensive therapy and therefore dominated the drug therapy options. Radiofrequency ablation improved quality-adjusted life expectancy by 3.10 quality-adjusted life-years and reduced lifetime medical expenditures by $27,900 compared with long-term drug therapy. Long-term drug therapy was more effective and had lower costs than episodic drug therapy. The findings were highly robust over substantial variations in assumptions about the efficacy and complication rate of radiofrequency ablation, including analyses in which the complication rate was tripled and efficacy was decreased substantially. Radiofrequency ablation substantially improves quality of life and reduces costs when it is used to treat highly symptomatic patients. Although the benefit of radiofrequency ablation has not been studied in less symptomatic patients, a small improvement in quality of life is sufficient to give preference to radiofrequency ablation over drug therapy.
View details for PubMedID 11103056
Prediction of risk for patients with unstable angina.
Evidence report/technology assessment (Summary)
View details for PubMedID 11013605
Should survivors of myocardial infarction be screened for risk of sudden death? A cost-effectiveness analysis
ELSEVIER SCIENCE INC. 2000: 550A–551A
View details for Web of Science ID 000085209702088
Clustering and the design of preference-assessment surveys in healthcare
HEALTH SERVICES RESEARCH
1999; 34 (5): 1033-1045
To demonstrate cluster analysis as a potentially useful tool for defining common outcomes empirically and facilitating the assessment of preferences for health states, we surveyed 224 patients with ventricular arrhythmias treated at Kaiser Permanente of Northern California. Physical functioning was measured using the Duke Activity Status Index (DASI), and mental status and vitality using items from the Medical Outcomes Study Short Form-36 (SF-36). A "k-means" clustering algorithm was used to identify prototypical health states, in which patients in the same cluster shared similar responses to items in the survey. The clustering algorithm yielded four prototypical health states. Cluster 1 (21 percent of patients) was characterized by high scores on physical functioning, vitality, and mental health. Cluster 2 (33 percent of patients) had low physical function but high scores on vitality and mental health. Cluster 3 (29 percent of patients) had low physical function and low vitality but preserved mental health. Cluster 4 (17 percent of patients) had low scores on all scales. These clusters served as the basis of written descriptions of the health states. Employing a clustering algorithm to analyze health status survey data enables researchers to gain a data-driven, concise summary of the experiences of patients.
View details for Web of Science ID 000084014800006
View details for PubMedID 10591271
View details for PubMedCentralID PMC1089071
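The abstract above describes identifying prototypical health states by k-means clustering of survey scores. The following is a minimal sketch of that idea, not the paper's actual implementation: the patient scores, group structure, and starting centroids are all made up for illustration.

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    centroids = [tuple(c) for c in centroids]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(len(centroids)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: each centroid becomes the mean of its cluster.
        for c in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels, centroids

# Hypothetical (physical function, vitality, mental health) scores on a
# 0-100 scale, echoing the four cluster profiles reported in the abstract.
patients = [
    (90, 85, 88), (92, 80, 90),   # high on all three scales
    (30, 75, 85), (25, 70, 82),   # low physical, high vitality/mental
    (28, 30, 80), (32, 25, 78),   # low physical and vitality
    (20, 22, 30), (18, 28, 25),   # low on all three scales
]
# Seed one starting centroid from each apparent group (deterministic demo).
seeds = [patients[0], patients[2], patients[4], patients[6]]
labels, centroids = kmeans(patients, seeds)
```

With well-separated groups like these, the algorithm recovers the four prototypical states; in practice, initialization and the choice of k require more care.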
An evaluation of beta-blockers, calcium antagonists, nitrates, and alternative therapies for stable angina.
Evidence report/technology assessment (Summary)
View details for PubMedID 11925969
Quality of life before and after radiofrequency catheter ablation in patients with drug refractory atrioventricular nodal reentrant tachycardia
AMERICAN JOURNAL OF CARDIOLOGY
1999; 84 (4): 471-?
In a retrospective survey of 161 highly symptomatic patients, we found significant improvements in symptoms, patient utility, and use of medical care services after radiofrequency ablation for atrioventricular nodal reentrant tachycardia.
View details for Web of Science ID 000081987400021
View details for PubMedID 10468092
Meta-analysis of trials comparing beta-blockers, calcium antagonists, and nitrates for stable angina
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
1999; 281 (20): 1927-1936
Which drug is most effective as a first-line treatment for stable angina is not known. To compare the relative efficacy and tolerability of treatment with beta-blockers, calcium antagonists, and long-acting nitrates for patients who have stable angina. We identified English-language studies published between 1966 and 1997 by searching the MEDLINE and EMBASE databases and reviewing the bibliographies of identified articles to locate additional relevant studies. Randomized or crossover studies comparing antianginal drugs from 2 or 3 different classes (beta-blockers, calcium antagonists, and long-acting nitrates) lasting at least 1 week were reviewed. Studies were selected if they reported at least 1 of the following outcomes: cardiac death, myocardial infarction, study withdrawal due to adverse events, angina frequency, nitroglycerin use, or exercise duration. Ninety (63%) of 143 identified studies met the inclusion criteria. Two independent reviewers extracted data from selected articles, settling any differences by consensus. Outcome data were extracted a third time by 1 of the investigators. We combined results using odds ratios (ORs) for discrete data and mean differences for continuous data. Studies of calcium antagonists were grouped by duration and type of drug (nifedipine vs nonnifedipine). Rates of cardiac death and myocardial infarction were not significantly different for treatment with beta-blockers vs calcium antagonists (OR, 0.97; 95% confidence interval [CI], 0.67-1.38; P = .79). There were 0.31 (95% CI, 0.00-0.62; P = .05) fewer episodes of angina per week with beta-blockers than with calcium antagonists. Beta-blockers were discontinued because of adverse events less often than were calcium antagonists (OR, 0.72; 95% CI, 0.60-0.86; P<.001). The differences between beta-blockers and calcium antagonists were most striking for nifedipine (OR for adverse events with beta-blockers vs nifedipine, 0.60; 95% CI, 0.47-0.77). Too few trials compared nitrates with calcium antagonists or beta-blockers to draw firm conclusions about relative efficacy. Beta-blockers provide similar clinical outcomes and are associated with fewer adverse events than calcium antagonists in randomized trials of patients who have stable angina.
View details for Web of Science ID 000080427300033
View details for PubMedID 10349897
A global analysis of technological change in health care: The case of heart attacks
HEALTH AFFAIRS
1999; 18 (3): 250-255
Cost effectiveness of radiofrequency ablation for treatment of paroxysmal supraventricular tachycardias.
SAGE PUBLICATIONS INC. 1998: 458–58
View details for Web of Science ID 000076422700033
Estimating the proportion of post-myocardial infarction patients who may benefit from prophylactic implantable defibrillator placement from analysis of the CAST Registry
AMERICAN JOURNAL OF CARDIOLOGY
1998; 82 (5): 683-?
We estimated the proportion of post-myocardial infarction patients who would have been eligible for the Multicenter Automatic Defibrillator Implantation Trial (MADIT) from a population of 94,797 patients with myocardial infarction entered into the Cardiac Arrhythmia Suppression Trial Registry. From this large population, only between 0.3% and 1.7% would have met strict eligibility criteria for MADIT.
View details for Web of Science ID 000075616100028
View details for PubMedID 9732904
Design of a modular, extensible decision support system for arrhythmia therapy
JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION
We developed a decision-support system for evaluation of treatment alternatives for supraventricular and ventricular arrhythmias. The system uses independent decision models that evaluate the costs and benefits of treatment for recurrent atrioventricular-node reentrant tachycardia (AVNRT), and of therapies to prevent sudden cardiac death (SCD) in patients at risk for life-threatening ventricular arrhythmias. Each of the decision models is accessible through a web-based interface that enables remote users to browse the model's underlying evidence and to perform analyses of effectiveness, cost effectiveness, and sensitivity to input variables. Because the web-based interface is independent of the models, we can extend the functionality of the system by adding decision models. This system illustrates that the use of a library of web-accessible decision models provides decision support economically to widely dispersed users.
View details for PubMedID 9929308
Use and accuracy of state death certificates for classification of sudden cardiac deaths in high-risk populations
AMERICAN HEART JOURNAL
1997; 134 (6): 1129-1132
In a large cohort of patients with known or suspected coronary disease, we evaluated the characteristics of 407 patients who died after hospital discharge and tested whether the state death certificate can be used to classify deaths as sudden cardiac versus nonsudden. Compared with a paramedic classification system based on heart rhythm, the death certificate-based classification resulted in a sensitivity that ranged from 78% to 85% and a specificity that ranged from 25% to 58%. We conclude that the death certificate can be used to identify cases of sudden cardiac death in patients at high risk; however, there is a substantial rate of false-positive sudden death classification.
View details for Web of Science ID 000071254500020
View details for PubMedID 9424075
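The death-certificate study above reports classification accuracy as sensitivity and specificity against a paramedic reference standard. A minimal sketch of that computation follows; the confusion-matrix counts are hypothetical, chosen only so the resulting values fall inside the ranges the abstract reports.

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity and specificity of a classifier against a reference
    standard, from confusion-matrix counts:
      tp: classified sudden, truly sudden     fn: classified nonsudden, truly sudden
      fp: classified sudden, truly nonsudden  tn: classified nonsudden, truly nonsudden
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: death-certificate classification of sudden cardiac
# death vs a paramedic rhythm-based reference (not the study's actual data).
sens, spec = sensitivity_specificity(tp=110, fn=30, fp=120, tn=80)
# sens ~ 0.79 and spec = 0.40 here, consistent with the reported ranges
# (sensitivity 78-85%, specificity 25-58%) and the study's conclusion that
# false-positive sudden-death classification is substantial.
```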
Quantitative overview of randomized trials of amiodarone to prevent sudden cardiac death
1997; 96 (9): 2823-2829
Some randomized clinical trials of amiodarone therapy to prevent sudden cardiac death have had positive results and others have had negative results, but all were relatively small. This meta-analysis aimed to pool all trials to assess the effect of amiodarone on mortality and the impact of differences in patient population and study design on trial outcomes. Fifteen randomized trials were identified, and outcome measures were combined by use of a random effects model. The effect of patient population and study design on total mortality was assessed by use of a hierarchical Bayes model. Amiodarone reduced total mortality by 19% (confidence limits, 6% to 31%; P<.01), with somewhat greater reductions in cardiac mortality (23%, P<.001) and sudden death (30%, P<.001). Mortality reductions were similar in trials enrolling patients after myocardial infarction (21%), with left ventricular dysfunction (22%), and after cardiac arrest (25%). There was a trend toward greater risk reduction in trials requiring evidence of ventricular ectopy (25%) than in the remaining trials (10%). The trials using placebo controls had considerably less risk reduction (10%) than trials with active controls (27%) or usual care controls (42%, posterior odds <0.02). Amiodarone reduced total mortality by 10% to 19% in patients at risk of sudden cardiac death. Amiodarone reduced risk similarly in patients after myocardial infarction, with heart failure, or with clinically evident arrhythmia. The apparent inconsistencies among results of randomized trials appear to be due to small sample sizes and the type of control group used, not the type of patient enrolled.
View details for Web of Science ID A1997YF29500016
View details for PubMedID 9386144
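Several of the meta-analyses above pool trial results with a random-effects model on odds ratios. The sketch below shows one standard way to do this, the DerSimonian-Laird estimator on log odds ratios; the trial 2x2 tables are invented for illustration and are not the amiodarone data, and the original analyses may have used different estimators.

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (events/non-events in treatment vs control), with a 0.5
    continuity correction if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return math.log((a * d) / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def dersimonian_laird(tables):
    """Random-effects pooled log-OR and its standard error."""
    stats = [log_or(*t) for t in tables]
    y = [lor for lor, _ in stats]
    w = [1 / v for _, v in stats]          # fixed-effect (inverse-variance) weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(tables) - 1)) / c)             # between-trial variance
    w_re = [1 / (v + tau2) for _, v in stats]                # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))

# Hypothetical trials: (deaths_tx, survivors_tx, deaths_ctrl, survivors_ctrl)
trials = [(15, 85, 25, 75), (30, 170, 45, 155), (8, 92, 12, 88)]
pooled, se = dersimonian_laird(trials)
or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
```

Since all three invented trials favor treatment, the pooled OR comes out below 1, with the 95% CI obtained by exponentiating on the log scale.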
Cost-effectiveness of implantable cardioverter defibrillators relative to amiodarone for prevention of sudden cardiac death
ANNALS OF INTERNAL MEDICINE
1997; 126 (1): 1-12
Implantable cardioverter defibrillators (ICDs) are remarkably effective in terminating ventricular arrhythmias, but they are expensive and the extent to which they extend life is unknown. The marginal cost-effectiveness of ICDs relative to amiodarone has not been clearly established. To compare the cost-effectiveness of a third-generation implantable ICD with that of empirical amiodarone treatment for preventing sudden cardiac death in patients at high or intermediate risk. A Markov model was used to evaluate health and economic outcomes of patients who received an ICD, amiodarone, or a sequential regimen that reserved ICD for patients who had an arrhythmia during amiodarone treatment. Life-years gained, quality-adjusted life-years gained, costs, and marginal cost-effectiveness. For the base-case analysis, it was assumed that treatment with an ICD would reduce the total mortality rate by 20% to 40% at 1 year compared with amiodarone and that the ICD generator would be replaced every 4 years. In high-risk patients, if an ICD reduces total mortality by 20%, patients who receive an ICD live for 4.18 quality-adjusted life-years and have a lifetime expenditure of $88,400. Patients receiving amiodarone live for 3.68 quality-adjusted life-years and have a lifetime expenditure of $51,000. Marginal cost-effectiveness of an ICD relative to amiodarone is $74,400 per quality-adjusted life-year saved. If an ICD reduces mortality by 40%, the cost-effectiveness of ICD use is $37,300 per quality-adjusted life-year saved. Both choice of therapy (an ICD or amiodarone) and the cost-effectiveness ratio are sensitive to assumptions about quality of life. Use of an ICD will cost more than $50,000 per quality-adjusted life-year gained unless it reduces all-cause mortality by 30% or more relative to amiodarone. Current evidence does not definitively support or exclude a benefit of this magnitude, but ongoing randomized trials have sufficient statistical power to do so.
View details for Web of Science ID A1997WA16500001
View details for PubMedID 8992917
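The ICD study above uses a Markov model to compare lifetime costs and quality-adjusted life-years across strategies. The toy two-state (alive/dead) cohort model below illustrates the mechanics only; all inputs (mortality, costs, utility, discount rate, the 30% relative mortality reduction) are invented and are not the paper's calibrated values, and the published model has more states.

```python
def markov_cohort(annual_mort, annual_cost, upfront_cost, utility,
                  years=20, disc=0.03):
    """Two-state (alive/dead) annual-cycle cohort model.
    Returns discounted lifetime cost and QALYs per patient."""
    alive = 1.0                       # fraction of the cohort still alive
    cost, qalys = float(upfront_cost), 0.0
    for t in range(years):
        df = 1 / (1 + disc) ** t      # discount factor for cycle t
        qalys += alive * utility * df # quality-weighted time this cycle
        cost += alive * annual_cost * df
        alive *= (1 - annual_mort)    # transition: some of the cohort dies
    return cost, qalys

# Hypothetical inputs: amiodarone vs ICD with a 30% relative mortality
# reduction, a large device cost up front, and higher annual follow-up cost.
amio_cost, amio_qaly = markov_cohort(0.12, 4_000, 2_000, 0.80)
icd_cost, icd_qaly = markov_cohort(0.12 * 0.7, 6_000, 30_000, 0.80)
icer = (icd_cost - amio_cost) / (icd_qaly - amio_qaly)  # $ per QALY gained
```

The incremental cost-effectiveness ratio (ICER) divides the extra discounted cost of the more effective strategy by the extra QALYs it delivers, which is the quantity the paper reports (e.g., $74,400 per QALY in its 20%-reduction base case).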
Presentation and explanation of medical decision models using the World Wide Web.
Proceedings of the AMIA Annual Fall Symposium
We demonstrated the use of the World Wide Web for the presentation and explanation of a medical decision model. We put on the web a treatment model developed as part of the Cardiac Arrhythmia and Risk of Death Patient Outcomes Research Team (CARD PORT). To demonstrate the advantages of our web-based presentation, we critiqued both the conventional paper-based and the web-based formats of this decision-model presentation with reference to an accepted published guide to understanding clinical decision models. A web-based presentation provides a useful supplement to paper-based publications by allowing authors to present their model in greater detail, to link model inputs to the primary evidence, and to disseminate the model to peer investigators for critique and collaborative modeling.
View details for PubMedID 8947628
A METAANALYSIS OF RANDOMIZED TRIALS COMPARING CORONARY-ARTERY BYPASS-GRAFTING WITH PERCUTANEOUS TRANSLUMINAL CORONARY ANGIOPLASTY IN MULTIVESSEL CORONARY-ARTERY DISEASE
AMERICAN JOURNAL OF CARDIOLOGY
1995; 76 (14): 1025-1029
We performed a meta-analysis of randomized trials that compared percutaneous transluminal coronary angioplasty (PTCA) with coronary artery bypass graft (CABG) surgery in patients with multivessel coronary artery disease. The outcomes of death, combined death and nonfatal myocardial infarction (MI), repeat revascularization, and freedom from angina were analyzed. The overall risk of death and nonfatal MI was not different over a follow-up of 1 to 3 years (CABG:PTCA odds ratio [OR] 1.03, 95% confidence interval 0.81 to 1.32, p = 0.81). Patients randomized to CABG tended to have a higher risk of death or MI in the early, periprocedural period (OR 1.33, p = 0.091), but a lower risk in subsequent follow-up (OR 0.74, p = 0.093). CABG patients were much less likely to undergo another revascularization procedure (p < 0.00001), and were more likely to be angina free (OR 1.57, p < 0.00001). Thus, CABG and PTCA patients have similar overall risks of death and nonfatal MI at 1 to 3 years of follow-up, but relative risk differences in mortality of up to 25% cannot be excluded. CABG patients have significantly less angina and less repeat revascularization than PTCA patients.
View details for Web of Science ID A1995TE77800008
View details for PubMedID 7484855
RELATIVE RISKS OF BYPASS-SURGERY AND CORONARY ANGIOPLASTY FOR MULTIVESSEL CORONARY-ARTERY DISEASE - A METAANALYSIS
LIPPINCOTT WILLIAMS & WILKINS. 1995: 41–41
View details for Web of Science ID A1995TB48000038