Current Role at Stanford
Education & Certifications
MPH, Boston University, Epidemiology/Biostatistics (2000)
Ph.D., Boston University, Biostatistics (2009)
Mathematical Statistician, NIH/NCI/DCCPS/SRP (6/2011 - 7/2015)
Program Director and Biostatistician for the SEER program.
Rockville, MD USA
Postdoctoral Research Fellow, Harvard Medical School (2010 - 2011)
Awarded a T32 fellowship to study competing risks in total hip replacement
Biostatistician/Programmer, Brigham & Women's Hospital (2005 - 2010)
Programmer and Biostatistician for Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women's Hospital
Professional Affiliations and Activities
Member, American Statistical Association (1998 - Present)
Racial Disparities in Pediatric Kidney Transplantation under the New Kidney Allocation System in the United States.
Clinical Journal of the American Society of Nephrology: CJASN
Background and Objectives: In December 2014, the Kidney Allocation System (KAS) was implemented to improve equity in access to transplantation, but preliminary studies in children show mixed results. Thus, we aimed to assess how the 2014 KAS policy change affected racial/ethnic disparities in pediatric kidney transplantation access and related outcomes. Design, Setting, Participants, and Measurements: We conducted a retrospective cohort study of children <18 years of age active on the kidney transplant list from 2008 to 2019 using the Scientific Registry of Transplant Recipients. Log-logistic accelerated failure time models were used to determine time from first activation on the transplant list, and time on dialysis, to deceased-donor transplant, each with KAS era or race/ethnicity as the exposure of interest. We used logistic regression to assess odds of delayed graft function. Log-rank tests assessed time to graft loss within racial/ethnic groups across KAS eras. Results: All children experienced longer wait times from activation to transplantation post-KAS. In univariable analysis, Black children, Hispanic children, and other children of color experienced longer times from activation to transplant than White children in both eras; this finding was largely attenuated in multivariable analysis (post-KAS time ratios 1.16 [95% CI, 1.01-1.32], 1.13 [1.00-1.28], and 1.17 [0.96-1.41], respectively). Multivariable analysis also showed that racial/ethnic disparities in time from dialysis initiation to transplantation in the pre-KAS era were mitigated in the post-KAS era. There were no disparities in odds of delayed graft function. Black and Hispanic children experienced longer times with a functioning graft in the post-KAS era.
Conclusions: In multivariable analysis, no racial/ethnic disparities in time from activation to deceased-donor transplantation were seen before or after implementation of KAS, while equity in time from dialysis initiation to transplantation and in odds of short-term graft loss improved after KAS, with no disparities in delayed graft function.
View details for DOI 10.2215/CJN.06740521
View details for PubMedID 34670797
Intensive Blood Pressure Control and Diabetes Mellitus-Related Limb Events in Patients With Type 2 Diabetes Mellitus: Reanalysis of ACCORD. Journal of the American Heart Association 2021: e021407
Karnofsky Performance Score-Failure to Thrive as a Frailty Proxy?
2021; 7 (7): e708
Among patients listed for kidney transplantation, the Karnofsky Performance Status (KPS) Scale has been used as a proxy for frailty and proposed as a predictor of long-term posttransplant outcomes. The KPS is required by the Organ Procurement and Transplantation Network for all transplants; however, the interrater reliability of KPS reporting in kidney transplant candidates has not been well investigated, and there is concern regarding limitations of using KPS that may influence transplant eligibility. Methods: We performed an observational study using existing Scientific Registry of Transplant Recipients data from 2006 to 2020 to examine the variability, reliability, and trends in the KPS among patients on the kidney transplant waitlist. Results: Our analysis included 8197 kidney transplant candidates with >1 KPS score in a 3-month period. We observed 2-7 scores per patient, with an average score of 78.9 (SD = 12; 95% confidence interval, 78.8-79.1). We found substantial variability in KPS reporting: 27% of patients had scores that differed widely, by 20-80 points. Interrater reliability on the 10-point scale was poor (30%). When using a condensed 4-category scale (disabled, requires assistance, capable of self-care, normal activity), 38% of patients experienced at least a 1-category shift in their score. Conclusions: The lack of reliability in KPS reporting raises concerns about applying the KPS as a proxy for frailty and as a metric when evaluating candidacy for kidney transplantation.
View details for DOI 10.1097/TXD.0000000000001164
View details for PubMedID 34124344
Performance versus Risk Factor-Based Approaches to Coronary Artery Disease Screening in Waitlisted Kidney Transplant Candidates.
INTRODUCTION: Current screening algorithms for coronary artery disease (CAD) before kidney transplantation result in many tests but few interventions. OBJECTIVE: The aim of this study was to evaluate the utility of the 6-minute walk test (6MWT), an office-based test of cardiorespiratory fitness, for risk stratification in this setting. METHODS: We enrolled 360 patients near the top of the kidney transplant waitlist at our institution. All patients underwent CAD evaluation irrespective of 6MWT results. We examined the association between 6MWT and time to CAD-related events (defined as cardiac death, revascularization, nonfatal myocardial infarction, and removal from the waitlist for CAD), treating noncardiac death and waitlist removal for non-CAD reasons as competing events. RESULTS: The 6MWT-based approach designated approximately 45% of patients as "low risk," whereas risk factor- and symptom-based approaches designated 14% and 81% of patients as "low risk," respectively. The 6MWT-based approach was not significantly associated with CAD-related events within 1 year (subproportional hazard ratio [sHR] 1.00 [0.90-1.11] per 50 m) but was significantly associated with competing events (sHR 0.70 [0.66-0.75] per 50 m). In a companion analysis removing waitlist status from consideration, the 6MWT result was associated with the development of CAD-related events (sHR 0.92 [0.84-1.00] per 50 m). CONCLUSIONS: The 6MWT designates fewer patients as high risk and in need of further testing (compared with risk factor-based approaches), but its utility as a pure CAD risk stratification tool is modulated by the background waitlist removal rate. CAD screening before kidney transplant should be tailored according to a patient's actual chance of receiving a transplant.
View details for DOI 10.1159/000516158
View details for PubMedID 34034263
And Then There Were Three: Effects of Pretransplant Dialysis on Multiorgan Transplantation.
2021; 7 (2): e657
Background: Simultaneous liver-kidney (SLK) and simultaneous heart-kidney (SHK) transplantation currently utilize 6% of deceased donor kidneys in the United States. To what extent residual kidney function accounts for apparent kidney allograft survival is unknown. Methods: We examined all adult SLK and SHK transplants in the United States during 1995-2014. We considered the duration of dialysis preceding SLK or SHK (≥90 d, 1-89 d, or none) as a proxy for residual kidney function. We used multinomial logistic regression to estimate the difference in the adjusted likelihood of 6- and 12-month apparent kidney allograft failure between the no-dialysis and ≥90-days-dialysis groups. Results: Of 4875 SLK and 848 SHK recipients, 1775 (36%) SLK and 449 (53%) SHK recipients received no dialysis before transplant. The likelihood of apparent kidney allograft failure was 1%-3% lower at 12 months in SLK and SHK recipients who did not require pretransplant dialysis relative to recipients who required ≥90 days of pretransplant dialysis. Among 3978 SLK recipients who survived to 1 year, no pretransplant dialysis was associated with a lower risk of apparent kidney allograft failure over a median follow-up of 5.7 years (adjusted hazard ratio 0.73 [0.55-0.96]). Conclusions: Patients with residual kidney function at the time of multiorgan transplantation (MOT) are less likely to have apparent failure of the kidney allograft. Whether residual kidney function facilitates function of the allograft or whether some SLK and SHK recipients have 3 functional kidneys is unknown. Sustained kidney function after SLK and SHK transplants does not necessarily indicate successful MOT.
View details for DOI 10.1097/TXD.0000000000001112
View details for PubMedID 33490382
Documentation of Reproductive Health Counseling Among Women With CKD: A Retrospective Chart Review. American Journal of Kidney Diseases: The Official Journal of the National Kidney Foundation 2021
Factors Associated With Failure to Achieve the Intensive Blood Pressure Target in the Systolic Blood Pressure Intervention Trial (SPRINT).
Hypertension (Dallas, Tex. : 1979)
SPRINT (Systolic Blood Pressure Intervention Trial) found that randomization of nondiabetic participants at high cardiovascular risk to an intensive (systolic blood pressure [SBP] <120 mm Hg) versus standard (SBP <140 mm Hg) target resulted in a 25% risk reduction in the first cardiovascular composite event (ie, cardiovascular death or nonfatal myocardial infarction, stroke, or hospitalization for heart failure) and a 27% risk reduction in all-cause mortality. In this post hoc analysis, we sought to determine the factors associated with failure to achieve the SBP target in 4678 SPRINT participants randomized to the intensive treatment group. Using a generalized estimating equation model, we assessed variables associated with failure to achieve the intensive SBP target as a repeated outcome collected during serial follow-up visits, including the occurrence of serious adverse events. In the multivariable model adjusted for baseline demographic, clinical, and laboratory variables, older age, higher SBP, underlying chronic kidney disease, a higher number of antihypertensives, and moderate cognitive impairment at screening were associated with failure to achieve the intensive SBP target. Occurrence of a serious adverse event during the trial was associated with 20% higher odds of failure to achieve the SBP target. Participants of Hispanic ethnicity had 47% lower odds of failure to achieve the intensive SBP target relative to non-Hispanic Whites. Understanding barriers to achieving intensive SBP targets should allow clinicians to optimize management of hypertension in patients at high risk for cardiovascular disease.
View details for DOI 10.1161/HYPERTENSIONAHA.120.16155
View details for PubMedID 33131314
Screening Rates for Primary Aldosteronism in Resistant Hypertension: A Cohort Study.
Hypertension (Dallas, Tex. : 1979)
Resistant hypertension is associated with higher rates of cardiovascular disease, kidney disease, and death than primary hypertension. Although clinical practice guidelines recommend screening for primary aldosteronism among persons with resistant hypertension, rates of screening are unknown. We identified 145,670 persons with hypertension and excluded persons with congestive heart failure or advanced chronic kidney disease. Among this cohort, we studied 4660 persons aged 18 to <90 years with resistant hypertension from 2008 to 2014 who had laboratory tests available within the following 24 months. The screening rate for primary aldosteronism in persons with resistant hypertension was 2.1%. Screened persons were younger (55.9±13.3 versus 65.5±11.6 years; P<0.0001) and had higher systolic (145.1±24.3 versus 139.6±20.5 mm Hg; P=0.04) and diastolic blood pressure (81.8±13.6 versus 74.4±13.8 mm Hg; P<0.0001), lower rates of coronary artery disease (5.2% versus 14.2%; P=0.01), and lower serum potassium concentrations (3.9±0.6 versus 4.1±0.5 mmol/L; P=0.04) than unscreened persons. Screened persons had significantly higher rates of prescription of calcium channel blockers, mixed alpha/beta-adrenergic receptor antagonists, sympatholytics, and vasodilators, and lower rates of prescription of loop, thiazide, and thiazide-type diuretics. The prescription of mineralocorticoid receptor antagonists or other potassium-sparing diuretics was not significantly different between groups (P=0.20). In conclusion, only 2.1% of eligible persons received a screening test within 2 years of meeting criteria for resistant hypertension. Low rates of screening were not due to the prescription of antihypertensive medications that may potentially interfere with interpretation of the screening test. Efforts to highlight guideline-recommended screening and targeted therapy are warranted.
View details for DOI 10.1161/HYPERTENSIONAHA.119.14359
View details for PubMedID 32008436
Physical Performance Testing in Kidney Transplant Candidates at the Top of the Waitlist.
American Journal of Kidney Diseases: The Official Journal of the National Kidney Foundation
Rationale & Objective: Frailty and poor physical function are associated with adverse kidney transplant outcomes, but how to incorporate this knowledge into clinical practice is uncertain. We studied the association between measured physical performance and clinical outcomes among patients on kidney transplant waitlists. Study Design: Prospective observational cohort study. Setting & Participants: We studied consecutive patients evaluated in our Transplant Readiness Assessment Clinic, a top-of-the-waitlist management program, from May 2017 through December 2018 (N = 305). We incorporated physical performance testing, including the 6-minute walk test (6MWT) and the sit-to-stand (STS) test, into routine clinical assessments. Predictors: 6MWT and STS test results. Outcomes: Primary, time to adverse waitlist outcomes (removal from waitlist or death); secondary, time to transplantation and time to death. Analytical Approach: We used linear regression to examine the relationship between clinical characteristics and physical performance test results, and subdistribution hazards models to examine the association between physical performance test results and outcomes. Results: Median 6MWT and STS results were 393 meters (25th-75th percentile, 305-455) and 17 repetitions (25th-75th percentile, 12-21), respectively. Clinical characteristics and Estimated Post-Transplant Survival scores accounted for only 14%-21% of the variance in 6MWT/STS results. 6MWT/STS results were associated with adverse waitlist outcomes (adjusted subdistribution hazard ratio [sHR] 1.42 [95% CI, 1.30-1.56] per 50-meter decrement in 6MWT and 1.53 [95% CI, 1.33-1.75] per 5-repetition decrement in STS) and with transplantation (adjusted sHR 0.80 [95% CI, 0.72-0.88] per 50-meter decrement in 6MWT and 0.80 [95% CI, 0.71-0.89] per 5-repetition decrement in STS). Addition of either STS or 6MWT to survival models containing clinical characteristics improved fit (likelihood ratio test, p < 0.001). Limitations: Single-center observational study. Other measures of global health status (e.g., the Fried frailty index or short physical performance battery) were not examined. Conclusions: Among waitlisted kidney transplant candidates with high Kidney Allocation Scores, standardized and easily performed physical performance test results are associated with waitlist outcomes and contain information beyond what is currently routinely collected in clinical practice.
View details for DOI 10.1053/j.ajkd.2020.04.009
View details for PubMedID 32512039
Toward telemedicine-compatible physical functioning assessments in kidney transplant candidates.
Frailty is associated with adverse kidney transplant outcomes and can be assessed by subjective and objective metrics. There is increasing recognition of the value of metrics obtainable remotely. We compared the self-reported SF-36 physical functioning subscale score (SF-36 PF) with in-person physical performance tests (6-minute walk and sit-to-stand) in a prospective cohort of kidney transplant candidates. We assessed each metric's ability to predict time to the composite outcome of waitlist removal or death, censoring at transplant. We built time-dependent receiver operating characteristic curves and calculated the area under the curve [AUC(t)] at 1 year, using bootstrapping for internal validation. In 199 patients followed for a median of 346 days, 41 reached the composite endpoint. Lower SF-36 PF scores were associated with higher risk of waitlist removal/death, with every 10-point decrease corresponding to a 16% increase in risk. All models showed an AUC(t) of 0.83-0.84 that did not contract substantially after internal validation. Among kidney transplant candidates, SF-36 PF, obtainable remotely, can help to stratify the risk of waitlist removal or death, and may be used as a screening tool for poor physical functioning in ongoing candidate evaluation, particularly where travel, increasing patient volume, or other restrictions challenge in-person assessment.
View details for DOI 10.1111/ctr.14173
View details for PubMedID 33247983
Impact of Pre-Transplant Donor BK Viruria in Kidney Transplant Recipients.
The Journal of Infectious Diseases
BACKGROUND: BK virus (BKV) is a significant cause of nephropathy in kidney transplantation. The goal of this study was to characterize the course and source of BKV in kidney transplant recipients. METHODS: We prospectively collected pre-transplant plasma and urine samples from living and deceased kidney donors and performed BKV PCR and IgG testing on pre-transplant and serially collected post-transplant samples in kidney transplant recipients. RESULTS: Among deceased donors, 8.1% (17/208) had detectable BKV DNA in urine prior to organ procurement. BK viruria was observed in 15.4% (6/39) of living donors and 8.5% (4/47) of deceased donors of recipients at our institution (p=0.50). BKV VP1 sequencing revealed identical virus between donor-recipient pairs, suggesting donor transmission of virus. Recipients of BK viruric donors were more likely to develop BK viruria (66.6% vs 7.8%, p<0.001) and viremia (66.6% vs 8.9%, p<0.001), with a shorter time to onset (log-rank, p<0.001). Although donor BKV IgG titers were higher in recipients who developed BK viremia, pre-transplant donor, recipient, and combined donor/recipient serology status was not associated with BK viremia (p=0.31, 0.75, and 0.51, respectively). DISCUSSION: Donor BK viruria is associated with early BK viruria and viremia in kidney transplant recipients. BKV PCR testing of donor urine may be useful in identifying recipients at risk for BKV complications.
View details for PubMedID 30869132
Trimethylamine N-Oxide and Cardiovascular Outcomes in Patients with ESKD Receiving Maintenance Hemodialysis CLINICAL JOURNAL OF THE AMERICAN SOCIETY OF NEPHROLOGY 2019; 14 (2): 261–67
Fibroblast Growth Factor 23 Genotype and Cardiovascular Disease in Patients Undergoing Hemodialysis AMERICAN JOURNAL OF NEPHROLOGY 2019; 49 (2): 125–32
Longitudinal Changes in Kidney Function Following Heart Transplantation: Stanford Experience.
Many heart transplant recipients experience declining kidney function following transplantation. We aimed to quantify change in kidney function in heart transplant recipients stratified by pre-transplant kidney function. 230 adult heart transplant recipients between May 1, 2008 and December 31, 2014 were evaluated for up to 5 years post-transplant (median 1 year). Using 19,398 total eGFR assessments, we evaluated trends in estimated glomerular filtration rate (eGFR) in recipients with normal/near-normal (eGFR >45 mL/min/1.73 m2) versus impaired (eGFR <45 mL/min/1.73 m2) kidney function and the likelihood of reaching an eGFR of 20 mL/min/1.73 m2 after heart transplant. Baseline characteristics were similar. Immediately following heart transplant, the impaired pre-transplant kidney function group showed a mean eGFR gain of 9.5 mL/min/1.73 m2 (n=193), versus a mean decline of 4.9 mL/min/1.73 m2 (n=37) in the normal/near-normal group. Subsequent rates of eGFR decline were 2.2 mL/min/1.73 m2 per year versus 2.9 mL/min/1.73 m2 per year, respectively. The probability of reaching an eGFR of 20 mL/min/1.73 m2 or less at 1, 5, and 10 years following heart transplant was 1%, 4%, and 30% in the impaired group, and <1%, <1%, and 10% in the normal/near-normal group. Estimates of expected recovery in kidney function and its decline over time will help inform decision making about kidney care after heart transplantation.
View details for DOI 10.1111/ctr.13414
View details for PubMedID 30240515
Screening Rates for the Diagnostic Workup of Resistant Hypertension
LIPPINCOTT WILLIAMS & WILKINS. 2017
View details for Web of Science ID 000523486000196
Utility in Treating Kidney Failure in End-Stage Liver Disease With Simultaneous Liver-Kidney Transplantation
2017; 101 (5): 1111-1119
Simultaneous liver-kidney (SLK) transplantation plays an important role in treating kidney failure in patients with end-stage liver disease; it used 5% of deceased donor kidneys transplanted in 2015. We evaluated the utility, defined as posttransplant kidney allograft lifespan, of this practice. Using data from the Scientific Registry of Transplant Recipients, we compared outcomes for all SLK transplants between January 1, 1995, and December 3, 2014, with their donor-matched kidneys used in kidney-alone (Ki) or simultaneous pancreas-kidney (SPK) transplants. The primary outcome was kidney allograft lifespan, defined as the time free from death or allograft failure. Secondary outcomes included death and death-censored allograft failure. We adjusted all analyses for donor, transplant, and recipient factors. The adjusted 10-year mean kidney allograft lifespan was higher in Ki/SPK compared with SLK transplants by 0.99 years in the Model for End-stage Liver Disease (MELD) era and 1.71 years in the pre-MELD era. Death was higher in SLK recipients relative to Ki/SPK recipients: 10-year cumulative incidences 0.36 (95% confidence interval, 0.33-0.38) versus 0.19 (95% confidence interval, 0.17-0.21). SLK transplantation exemplifies the trade-off between the principles of utility and medical urgency. With each SLK transplantation, about 1 year of allograft lifespan is traded so that sicker patients, that is, SLK transplant recipients, are afforded access to the organ. These data provide a basis against which benefits derived from urgency-based allocation can be measured.
View details for DOI 10.1097/TP.0000000000001491
View details for PubMedID 28437790
Current estimates of the cure fraction: a feasibility study of statistical cure for breast and colorectal cancer.
Journal of the National Cancer Institute. Monographs
2014; 2014 (49): 244-254
The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared across study designs with short- and long-term follow-up. Methods tested include cause-specific and relative survival, parametric mixture models, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and a cohort restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction than parametric mixture models. Log-normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional- and distant-stage colorectal cancer but not for breast cancer. Projections for 2008 colorectal diagnoses show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve, which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data.
View details for DOI 10.1093/jncimonographs/lgu015
View details for PubMedID 25417238
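The parametric mixture models compared above share a common form. As background, the textbook mixture cure formulation (a standard reference point, not an equation taken from this paper) splits overall survival into a cured and an uncured component:

```latex
S(t) = \pi + (1 - \pi)\, S_u(t)
```

Here \(\pi\) is the cure fraction and \(S_u(t)\) is the survival function of uncured patients (e.g., Weibull or log-normal). Because \(S(t)\) approaches \(\pi\) only as \(t\) grows large, estimating \(\pi\) requires extrapolating beyond observed follow-up, which is why model flexibility and follow-up length both move the estimate.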
Adolescent and young adult cancer survival.
Journal of the National Cancer Institute. Monographs
2014; 2014 (49): 228-235
Adolescents and young adults (AYAs) face challenges in having their cancers recognized, diagnosed, treated, and monitored. Monitoring AYA cancer survival is of interest because of the previously documented lack of improvement in outcomes for these patients compared with younger and older patients. AYA patients 15-39 years old diagnosed with malignant cancers during 2000-2008 were selected from the SEER 17 registries data. Selected cancers were analyzed for incidence and 5-year relative survival by histology, stage, and receptor subtypes. Hazard ratios were estimated for the risk of cancer death among younger and older age groups relative to the AYA group. AYA survival was worse for female breast cancer (regardless of estrogen receptor status), acute lymphoid leukemia (ALL), and acute myeloid leukemia (AML). AYA survival for AML was lowest for a subtype associated with a mutation of the nucleophosmin 1 gene (NPM1). AYA survival for breast cancer and leukemia remains poor compared with younger and older survivors. Research is needed to address disparities and improve survival in this age group.
View details for DOI 10.1093/jncimonographs/lgu019
View details for PubMedID 25417236
View details for PubMedCentralID PMC4841167
A comparison of statistical approaches for physician-randomized trials with survival outcomes
CONTEMPORARY CLINICAL TRIALS
2012; 33 (1): 104-115
This study compares methods for analyzing correlated survival data from physician-randomized trials of health care quality improvement interventions. Several proposed methods adjust for correlated survival data; however, the most suitable method is unknown. Applying the characteristics of our study example, we performed three simulation studies to compare conditional, marginal, and non-parametric methods for analyzing clustered survival data. We simulated 1000 datasets using a shared frailty model with (1) fixed cluster size, (2) variable cluster size, and (3) non-lognormal random effects. Methods of analysis included the nonlinear mixed model (conditional), the marginal proportional hazards model with robust standard errors, the clustered logrank test, and the clustered permutation test (non-parametric). For each method considered, we estimated Type I error, power, mean squared error, and the coverage probability of the treatment effect estimator. We observed underestimated Type I error for the clustered logrank test. The marginal proportional hazards method performed well even when model assumptions were violated. Nonlinear mixed models were advantageous only when the distribution was correctly specified.
View details for DOI 10.1016/j.cct.2011.08.008
View details for Web of Science ID 000300072500018
View details for PubMedID 21924382
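The shared frailty design described above can be sketched in a few lines of code. The following Python sketch is illustrative only (the function name and parameter values are hypothetical, not the paper's actual simulation code): every subject in a cluster shares one frailty term, which induces within-cluster correlation, and treatment is assigned at the cluster level.

```python
import numpy as np

def simulate_clustered_survival(n_clusters=50, cluster_size=10, frailty_sd=0.5,
                                base_rate=0.1, treatment_hr=0.7, censor_rate=0.05,
                                seed=0):
    """Simulate clustered survival data from a shared lognormal frailty model.

    Each cluster draws one lognormal frailty shared by all its subjects;
    clusters (not subjects) are assigned to treatment. Returns a list of
    (cluster_id, treated, observed_time, event_indicator) tuples.
    """
    rng = np.random.default_rng(seed)
    rows = []
    for c in range(n_clusters):
        treated = c % 2                                # cluster-level assignment
        frailty = np.exp(rng.normal(0.0, frailty_sd))  # shared within cluster
        rate = base_rate * frailty * (treatment_hr if treated else 1.0)
        event_t = rng.exponential(1.0 / rate, size=cluster_size)
        censor_t = rng.exponential(1.0 / censor_rate, size=cluster_size)
        for t, u in zip(event_t, censor_t):
            rows.append((c, treated, min(t, u), int(t <= u)))
    return rows
```

The paper's other two scenarios follow by drawing `cluster_size` from a distribution (variable cluster size) or replacing the lognormal `frailty` draw (non-lognormal random effects).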
Meta-analyses involving cross-over trials: methodological issues INTERNATIONAL JOURNAL OF EPIDEMIOLOGY 2011; 40 (6): 1732-1734
A SAS macro for a clustered logrank test
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
2011; 104 (2): 266-270
The clustered logrank test is a nonparametric method of significance testing for correlated survival data. Examples of its application include cluster randomized trials where groups of patients rather than individuals are randomized to either a treatment or a control intervention. We describe a SAS macro that implements the 2-sample clustered logrank test for data where the entire cluster is randomized to the same treatment group. We discuss the theory and applications behind this test as well as details of the SAS code.
View details for DOI 10.1016/j.cmpb.2011.02.001
View details for Web of Science ID 000296945100031
View details for PubMedID 21496938
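The macro itself is written in SAS, but the cluster-level building block — summing subject-level logrank scores within clusters — can be illustrated in Python. This is a sketch of the general idea, assuming no tied event times, and is not a translation of the published macro:

```python
import numpy as np

def logrank_cluster_scores(time, event, cluster):
    """Per-cluster sums of subject-level logrank scores.

    Each subject's score is observed events minus expected events under the
    pooled Nelson-Aalen hazard (a null martingale residual); summing these
    within clusters yields the pieces a clustered logrank statistic
    aggregates. Assumes distinct event times for simplicity.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    cluster = np.asarray(cluster)
    order = np.argsort(time)
    e_sorted = event[order]
    n = len(time)
    at_risk = n - np.arange(n)              # risk-set size at each sorted time
    cumhaz = np.cumsum(e_sorted / at_risk)  # pooled Nelson-Aalen estimate
    scores = np.empty(n)
    scores[order] = e_sorted - cumhaz       # observed minus expected per subject
    labels = np.unique(cluster)
    return labels, np.array([scores[cluster == c].sum() for c in labels])
```

By construction the cluster scores sum to zero overall; comparing their distribution between randomized arms (for example, by permuting cluster-level treatment labels) gives a test that respects the clustering.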
Guideline-conformity of initiation with oral hypoglycemic treatment for patients with newly therapy-dependent type 2 diabetes mellitus in Austria
PHARMACOEPIDEMIOLOGY AND DRUG SAFETY
2011; 20 (1): 57-65
To determine guideline conformity of initiation of oral hypoglycemic (OH) treatment for type 2 diabetes in Austria, and to study patient and prescriber correlates of recommended initiation with metformin monotherapy. We used claims from 11 sickness funds that covered 7.5 million individuals, representing >90% of the Austrian population. First-time OH use was defined as a first filled prescription after one year without any OH drug or insulin. Among these incident users, we described the OH drug class used and identified correlates of initiation with metformin monotherapy. From 1/2007 to 6/2008, we identified 42,882 incident users of an OH drug: 70.8% used metformin, 24.7% used a sulfonylurea, and 4.5% initiated treatment with another class. We estimated the incidence of OH-dependent type 2 diabetes at 3.8-4.4 per 1000 patient-years. We conducted multivariate analyses among 39,077 patients with available prescriber information. Independent correlates of initiation with metformin were younger age, female gender, waived co-payment, more recent initiation, fewer hospital days, and more therapeutic classes received in the year prior to first OH therapy (all p < 0.001). Prescriber specialty and age (p < 0.001), but not gender, were also associated with metformin initiation. Approximately 20% of metformin initiators had a second OH drug added within 18 months. While we were unable to ascertain specific contraindications to metformin (renal insufficiency, hepatic failure), <10% of the general population are expected to have these conditions. Seventy percent of new initiators of OH treatment in Austria received metformin as recommended by international guidelines. At least 20% did not, taking into account possible contraindications, which provides an opportunity for intervention.
View details for DOI 10.1002/pds.2059
View details for Web of Science ID 000286071700008
View details for PubMedID 21182153
Evaluation of Guideline-Conformity of Initiation of Oral Hypoglycemic Treatment for Incident Diabetes Mellitus Type 2 in Austria
WILEY-BLACKWELL. 2010: S54
View details for Web of Science ID 000209826200123
Primary Medication Non-Adherence: Analysis of 195,930 Electronic Prescriptions
JOURNAL OF GENERAL INTERNAL MEDICINE
2010; 25 (4): 284-290
Non-adherence to essential medications represents an important public health problem. Little is known about the frequency with which patients fail to fill prescriptions when new medications are started ("primary non-adherence") or about predictors of failure to fill. We evaluated primary non-adherence in community-based practices and identified predictors of non-adherence among 75,589 patients treated by 1,217 prescribers in the first year of a community-based e-prescribing initiative. We compiled all e-prescriptions written over a 12-month period and used filled claims to identify filled prescriptions. We calculated primary adherence and non-adherence rates for all e-prescriptions and for new medication starts and compared the rates across patient and medication characteristics. Using multivariable regression analyses, we examined which characteristics were associated with non-adherence. Of 195,930 e-prescriptions, 151,837 (78%) were filled. Of 82,245 e-prescriptions for new medications, 58,984 (72%) were filled. Primary adherence rates were higher for prescriptions written by primary care specialists, especially pediatricians (84%). Patients aged 18 and younger filled prescriptions at the highest rate (87%). In multivariable analyses, medication class was the strongest predictor of adherence, and non-adherence was common for newly prescribed medications treating chronic conditions such as hypertension (28.4%), hyperlipidemia (28.2%), and diabetes (31.4%). Many e-prescriptions were not filled, and previous studies of medication non-adherence failed to capture these prescriptions. Efforts to increase primary adherence could dramatically improve the effectiveness of medication therapy. Interventions that target specific medication classes may be most effective.
View details for DOI 10.1007/s11606-010-1253-9
View details for Web of Science ID 000275779300003
View details for PubMedID 20131023
A SAS macro for a clustered permutation test
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE
2009; 95 (1): 89-94
The clustered permutation test is a nonparametric method of significance testing for correlated data. It is often used in cluster randomized trials where groups of patients rather than individuals are randomized to either a treatment or control intervention. We describe a flexible and efficient SAS macro that implements the 2-sample clustered permutation test. We discuss the theory and applications behind this test as well as details of the SAS code.
View details for DOI 10.1016/j.cmpb.2009.02.005
View details for Web of Science ID 000266187900008
View details for PubMedID 19321221
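The SAS macro itself is not reproduced here, but the test it implements can be sketched briefly. A minimal Python illustration, assuming the trial's outcome has already been summarized to one statistic per cluster (the function and variable names below are illustrative, not the macro's interface):

```python
import numpy as np

def clustered_permutation_test(cluster_means, treated, n_perm=10000, seed=0):
    """Two-sample clustered permutation test (illustrative sketch).

    cluster_means : per-cluster summary statistic (e.g. mean outcome)
    treated       : boolean per cluster, True if assigned to the intervention
    """
    rng = np.random.default_rng(seed)
    cluster_means = np.asarray(cluster_means, dtype=float)
    treated = np.asarray(treated, dtype=bool)

    def stat(labels):
        # difference in group means of the cluster-level summaries
        return cluster_means[labels].mean() - cluster_means[~labels].mean()

    observed = stat(treated)
    count = 0
    for _ in range(n_perm):
        # permute treatment labels across whole clusters, never within them
        perm = rng.permutation(treated)
        if abs(stat(perm)) >= abs(observed):
            count += 1
    return observed, count / n_perm  # two-sided Monte Carlo p-value
```

Because randomization happens at the cluster level, only the cluster labels are shuffled; within-cluster correlation is preserved under the null by construction, which is what makes the test valid for cluster randomized trials.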
The Relationship Between Focal Erosions and Generalized Osteoporosis in Postmenopausal Women With Rheumatoid Arthritis
ARTHRITIS AND RHEUMATISM
2009; 60 (6): 1624-1631
Objective: Among rheumatoid arthritis (RA) patients who have had the disease for 10 years, more than half have focal erosions, and the risk of fracture is doubled. However, there is little information about the potential relationship between focal erosions and bone mineral density (BMD). The aim of this study was to determine whether lower BMD is associated with higher erosion scores among patients with RA. Methods: We enrolled 163 postmenopausal women with RA, none of whom were taking osteoporosis medications. Patients underwent dual x-ray absorptiometry at the hip and spine and hand radiography, and completed a questionnaire. The hand radiographs were scored using the Sharp method, and the relationship between BMD and erosions was measured using Spearman's correlation coefficients and adjusted linear regression models. Results: Patients had an average disease duration of 13.7 years, and almost all were taking a disease-modifying antirheumatic drug. Sixty-three percent were rheumatoid factor (RF) positive. The median modified Health Assessment Questionnaire score was 0.7, and the average Disease Activity Score in 28 joints was 3.8. The erosion score was significantly correlated with total hip BMD (r=-0.33, P<0.0001), but not with lumbar spine BMD (r=-0.09, P=0.27). Hip BMD was significantly lower in RF-positive patients than in RF-negative patients (P=0.02). In multivariable models that included age, body mass index, and cumulative oral glucocorticoid dose, neither total hip BMD nor lumbar spine BMD was significantly associated with focal erosions. Conclusion: Our results suggest that hip BMD is associated with focal erosions among postmenopausal women with RA, but that this association disappears after multivariable adjustment. While BMD and erosions may be correlated with bone manifestations of RA, their relationship is complex and influenced by other disease-related factors.
View details for DOI 10.1002/art.24551
View details for Web of Science ID 000267116800010
View details for PubMedID 19479876
An evaluation of statistical approaches for analyzing physician-randomized quality improvement interventions
CONTEMPORARY CLINICAL TRIALS
2008; 29 (5): 687-695
Health care quality improvement interventions are often evaluated in randomized trials in which individual physicians serve as the unit of randomization. These cluster randomized trials present a unique data structure that consists of many clusters of highly variable size. The appropriate method of analysis for these trials is unknown. We conducted a simulation study to compare several methods for analyzing data which were generated to replicate the structure of our motivating example. We varied the treatment effect size and the distributional assumptions about the random effect. Simulation was used to estimate power, coverage, bias, and mean squared error of full maximum likelihood estimation (MLE), approximate MLE using penalized quasi-likelihood (PQL), generalized estimating equations (GEE), group-bootstrapped logistic regression, and a clustered permutation test. Across all conditions tested, GEE and full MLE performed comparably. Bootstrapped methods were less powerful and had higher mean squared error under conditions of variable cluster size. PQL yielded biased results. The permutation test preserved Type I error rates, but had less power than the other methods considered.
View details for DOI 10.1016/j.cct.2008.04.003
View details for Web of Science ID 000259424400008
View details for PubMedID 18571476
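The simulation design described above hinges on its data-generating step: physician-level clusters of highly variable size whose outcomes are correlated through a shared random cluster effect. A hypothetical sketch of that step, assuming binary outcomes on the log-odds scale (all names and parameter values below are my own, not the paper's):

```python
import numpy as np

def simulate_trial(n_clusters=50, treatment_log_or=0.5, sigma_b=0.8, seed=1):
    """Generate one physician-randomized trial dataset (illustrative sketch).

    Returns a list of (cluster_id, treated, outcomes) tuples.
    """
    rng = np.random.default_rng(seed)
    rows = []
    for c in range(n_clusters):
        size = rng.integers(1, 60)        # highly variable cluster size
        treated = c % 2                   # randomize at the physician level
        b = rng.normal(0.0, sigma_b)      # random cluster effect (log-odds)
        logit = -1.0 + treatment_log_or * treated + b
        p = 1.0 / (1.0 + np.exp(-logit))
        y = rng.binomial(1, p, size)      # outcomes correlated via shared b
        rows.append((c, treated, y))
    return rows
```

Datasets like this can then be analyzed repeatedly with each candidate method (GEE, maximum likelihood, a permutation test, and so on) to estimate power, coverage, and bias, as the study describes.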
Medical comorbidity and health-related quality of life in bipolar disorder across the adult age span
JOURNAL OF AFFECTIVE DISORDERS
2005; 86 (1): 47-60
Background: Little is known about medical comorbidity or health-related quality of life (HRQOL) in bipolar disorder across the adult age span, especially in public-sector patients. Methods: We obtained cross-sectional demographic, clinical, and functional ratings for 330 veterans hospitalized for bipolar disorder with a Mini-Mental State score ≥27 and without active alcohol/substance intoxication or withdrawal, who had had at least 2 prior psychiatric admissions in the last 5 years. Structured medical record review identified current/lifetime comorbid medical conditions. The SF-36 Physical (PCS) and Mental (MCS) Component Scores measured physical and mental HRQOL. Univariate and multivariate analyses addressed the main hypotheses that physical and mental function decrease with age, with decrements due to increasing medical comorbidity. Results: PCS decreased (worsened) with age; the number of current comorbid medical diagnoses, but not age, explained the decline. Older individuals had higher (better) MCS, even without controlling for medical comorbidity. Multivariate analysis indicated association of MCS with age, current depressed/mixed episode, number of past-year depressive episodes, and current anxiety disorder, but not with medical comorbidity, number of past-year manic episodes, current substance disorder, or lifetime comorbidities. Limitations: This cross-sectional design studied a predominantly male hospitalized sample who qualified for and consented to subsequent randomized treatment. Conclusions: Medical comorbidity is associated with lower (worse) physical HRQOL, independent of age. Surprisingly, younger rather than older subjects reported lower mental HRQOL. This appears due in part to more complex psychiatric presentations, and several mechanisms are discussed. Both results suggest that age-specific assessment and treatment may enhance HRQOL outcomes.
View details for DOI 10.1016/j.jad.2004.12.006
View details for Web of Science ID 000228632700006
View details for PubMedID 15820270