Professional Education

  • PhD, University of Rennes, France, Clinical Epidemiology (2022)
  • MSc, University Claude Bernard, Lyon, France, Clinical Evaluation (2018)
  • PharmD, University of Regensburg, Germany, Pharmacy (2017)

All Publications

  • Peer review before trial conduct could increase research value and reduce waste. Journal of clinical epidemiology Siebert, M., Naudet, F., Ioannidis, J. P. 2023


    Traditional peer-review of clinical trials happens too late, after the trials are already done. However, lack of methodological rigor and presence of many biases can be detected and remedied in advance. Here, we examine several options for review and improvement of trials before their conduct: protocol review by peers, sponsors, regulatory authorities, and institutional ethical committees; registration in registry sites; deposition of the protocol and/or the statistical analysis plan in a public repository; peer-review and publication of the protocol and/or the statistical analysis plan in a journal; and Registered Reports. Some practices are considered standard (e.g. registration in a trial registry), while others are still uncommon but are becoming more frequent (e.g. publication of full trial protocols and statistical analysis plans). Ongoing challenges hinder a large-scale implementation of some promising practices such as Registered Reports. Innovative ideas are necessary to advance peer-review efficiency and rigor in clinical trials but also to lower the cumulative burden for peer-reviewers. We make several suggestions to enhance pre-conduct peer-review. Making all steps of the research process public and open may reverse siloed environments. Pre-conduct peer-review may be improved by routinely making publicly available all protocols that have gone through review by institutional review boards and regulatory agencies.

    View details for DOI 10.1016/j.jclinepi.2023.05.024

    View details for PubMedID 37286150

  • Ten simple rules for implementing open and reproducible research practices after attending a training course. PLoS computational biology Heise, V., Holman, C., Lo, H., Lyras, E. M., Adkins, M. C., Aquino, M. R., Bougioukas, K. I., Bray, K. O., Gajos, M., Guo, X., Hartling, C., Huerta-Gutierrez, R., Jindrová, M., Kenney, J. P., Kępińska, A. P., Kneller, L., Lopez-Rodriguez, E., Mühlensiepen, F., Richards, A., Richards, G., Siebert, M., Smith, J. A., Smith, N., Stransky, N., Tarvainen, S., Valdes, D. S., Warrington, K. L., Wilpert, N. M., Witkowska, D., Zaneva, M., Zanker, J., Weissgerber, T. L. 2023; 19 (1): e1010750


    Open, reproducible, and replicable research practices are a fundamental part of science. Training is often organized on a grassroots level, offered by early career researchers, for early career researchers. Buffet-style courses that cover many topics can inspire participants to try new things; however, they can also be overwhelming. Participants who want to implement new practices may not know where to start once they return to their research team. We describe ten simple rules to guide participants of relevant training courses in implementing robust research practices in their own projects, once they return to their research group. This includes (1) prioritizing and planning which practices to implement, which involves obtaining support and convincing others involved in the research project of the added value of implementing new practices; (2) managing problems that arise during implementation; and (3) making reproducible research and open science practices an integral part of a future research career. We also outline strategies that course organizers can use to prepare participants for implementation and support them during this process.

    View details for DOI 10.1371/journal.pcbi.1010750

    View details for PubMedID 36602968

  • Adoption of the World Health Organization’s best practices in clinical trial registration and reporting among top public and philanthropic funders of medical research in the United States Gamertsfelder, E., Delgado Figueroa, N., Keestra, S., Silva, A., Borana, R., Siebert, M., Bruckner, T. medRxiv. 2023


    Clinical trial funders in the United States have the opportunity to promote transparency, reduce research waste, and prevent publication bias by adopting policies that require grantees to appropriately preregister trials and report their results, as well as monitor trialists’ registration and reporting compliance. This paper has three aims: a) to assess to what extent the clinical trial policies and monitoring systems of the 14 largest public and philanthropic medical research funders in the United States meet global best practice benchmarks as stipulated by the WHO Joint Statement;[1] b) to assess whether public or philanthropic funders have adopted more WHO Joint Statement elements on average; and c) to assess whether and how funders’ policies refer to CONSORT standards for clinical trial outcome reporting in academic journals. The funders were assessed using an 11-item scoring tool based on WHO Joint Statement benchmarks. These 11 items fell into four categories: trial registration, academic publication, monitoring, and sanctions. An additional item captured whether and how funders referred to CONSORT within their trial policies. Each funder was independently assessed by 2-3 researchers. Funders were contacted to flag possible errors and omissions. Ambiguous or difficult-to-score items were settled by an independent adjudicator. Our cross-sectional study of the 14 largest public and philanthropic funders in the US finds that, on average, funders have implemented only 4.1/11 (37%) of World Health Organization best practices in clinical trial transparency. The most frequently adopted requirement was open access publishing (14/14 funders), and the least frequently adopted were (1) requiring the trial ID to appear in all publications (2/14 funders, 14%) and (2) making compliance reports public (2/14 funders, 14%). Public funders, on average, adopted more policy elements (5.3/11 items, 48%) than philanthropic funders (2.8/11, 25%). Only one funder’s policy documents mentioned the CONSORT statement. There is significant variation in the number of best-practice policy items adopted by medical research funders in the United States. Many funders fell significantly short of WHO Joint Statement benchmarks. Each funder could benefit from policy revision and strengthening.

  • Data-sharing and re-analysis for main studies assessed by the European Medicines Agency-a cross-sectional study on European Public Assessment Reports BMC MEDICINE Siebert, M., Gaba, J., Renault, A., Laviolle, B., Locher, C., Moher, D., Naudet, F. 2022; 20 (1): 177


    Transparency and reproducibility are expected to be normative practices in clinical trials used for decision-making on marketing authorisations for new medicines. This registered report introduces a cross-sectional study aiming to assess inferential reproducibility for main trials assessed by the European Medicines Agency. Two researchers independently identified all studies on new medicines, biosimilars and orphan medicines given approval by the European Commission between January 2017 and December 2019, categorised as 'main studies' in the European Public Assessment Reports (EPARs). Sixty-two of these studies were randomly sampled. One researcher retrieved the individual patient data (IPD) for these studies and prepared a dossier for each study, containing the IPD, the protocol and information on the conduct of the study. A second researcher who had no access to study reports used the dossier to run an independent re-analysis of each trial. All results of these re-analyses were reported in terms of each study's conclusions, p-values, effect sizes and changes from the initial protocol. A team of two researchers not involved in the re-analysis compared results of the re-analyses with published results of the trial. Two hundred ninety-two main studies in 173 EPARs were identified. Among the 62 studies randomly sampled, we received IPD for 10 trials. The median number of days between data request and data receipt was 253 [interquartile range 182-469]. For these ten trials, we identified 23 distinct primary outcomes for which the conclusions were reproduced in all re-analyses. Therefore, 10/62 trials (16% [95% confidence interval 8% to 28%]) were reproduced, as the 52 studies without available data were considered non-reproducible. There was no change from the original study protocol regarding the primary outcome in any of these ten studies. Spin was observed in the report of one study. Despite their results supporting decisions that affect millions of people's health across the European Union, most main studies used in EPARs lack transparency and their results are not reproducible for external researchers. Re-analyses of the few trials with available data showed very good inferential reproducibility.

    View details for DOI 10.1186/s12916-022-02377-2

    View details for Web of Science ID 000797969400001

    View details for PubMedID 35590360

    View details for PubMedCentralID PMC9119701

  • Medical journal requirements for clinical trial data sharing: Ripe for improvement. PLoS medicine Naudet, F., Siebert, M., Pellen, C., Gaba, J., Axfors, C., Cristea, I., Danchev, V., Mansmann, U., Ohmann, C., Wallach, J. D., Moher, D., Ioannidis, J. P. 2021; 18 (10): e1003844


    Florian Naudet and co-authors discuss strengthening requirements for sharing clinical trial data.

    View details for DOI 10.1371/journal.pmed.1003844

    View details for PubMedID 34695113

  • An open science pathway for drug marketing authorization-Registered drug approval PLOS MEDICINE Naudet, F., Siebert, M., Boussageon, R., Cristea, I. A., Turner, E. H. 2021; 18 (8): e1003726


    Florian Naudet and co-authors propose a pathway involving registered criteria for evaluation and approval of new drugs.

    View details for DOI 10.1371/journal.pmed.1003726

    View details for Web of Science ID 000683138300001

    View details for PubMedID 34370737

    View details for PubMedCentralID PMC8351924

  • Status, use and impact of sharing individual participant data from clinical trials: a scoping review BMJ OPEN Ohmann, C., Moher, D., Siebert, M., Motschall, E., Naudet, F. 2021; 11 (8): e049228


    To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data, and on the research output and impact of shared data. All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials were eligible. We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication. In addition, we inspected major clinical trial data-sharing platforms and contacted major journals/publishers, editorial groups and some funders. Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain. Ninety-three studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal. A survey of journal data suggests poor to moderate enforcement of the policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When requested, the purpose of the reuse is more often secondary analyses and meta-analyses, rarely re-analyses. Finally, studies focused on the real impact of data-sharing were rare and used surrogates such as citation metrics. There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High-level evidence is needed to assess whether the value of medical research increases with data-sharing practices.

    View details for DOI 10.1136/bmjopen-2021-049228

    View details for Web of Science ID 000692968300009

    View details for PubMedID 34408052

    View details for PubMedCentralID PMC8375721

  • Assessing the magnitude of reporting bias in a sample of recent systematic reviews-A comparison of a PROSPERO and Cochrane sample Siebert, M., et al. MetaArXiv. 2021

  • The research output on interventions for the behavioural risk factors alcohol & drug use and dietary risk is not related to their respective burden of ill health in countries at differing World Bank income levels JOURNAL OF GLOBAL HEALTH Frassetto, C., Madera, M., Siebert, M., Megranahan, K., Roberts, D., Plugge, E. 2020; 10 (2): 020401


    Alcohol and drug use (A&D) and dietary risks are two increasingly important risk factors. This study examines whether there is a relationship between the burden of these risk factors in countries of specific income bands as defined by the World Bank, and the number of primary studies included in Cochrane Systematic Reviews (CSRs) conducted in those countries. Data were extracted from primary studies included in CSRs assessing the two risk factors as outcomes. For each risk factor, data were obtained on its overall burden in disability-adjusted life years (DALYs) by World Bank income level and examined for a link between DALYs, the number of primary studies and the number of participants. A total of 1601 studies from 95 CSRs were included. Only 18.3% of the global burden for A&D is in high-income countries (HICs), but they produced 90.5% of primary studies and included 99.5% of participants. Only 14.2% of the dietary risk burden is in HICs, but they produced 80.5% of primary studies and included 98.1% of participants. This study demonstrates the unequal output of research, heavily weighted towards HICs. More initiatives with informed contextual understanding are required to address this inequality and promote health research in low- and middle-income countries.

    View details for DOI 10.7189/jogh.10.020401

    View details for Web of Science ID 000612476300114

    View details for PubMedID 33110568

    View details for PubMedCentralID PMC7520876

  • Funders' data-sharing policies in therapeutic research: A survey of commercial and non-commercial funders PLOS ONE Gaba, J., Siebert, M., Dupuy, A., Moher, D., Naudet, F. 2020; 15 (8): e0237464


    Funders are key players in supporting randomized controlled trial (RCT) data-sharing. This research aimed to describe commercial and non-commercial funders' data-sharing policies and to assess the compliance of funded RCTs with the existing data-sharing policies. Funders of clinical research having funded at least one RCT in the years 2016 to 2018 were surveyed. All 78 eligible non-commercial funders retrieved from the Sherpa/Juliet Initiative website and a random sample of 100 commercial funders selected from pharmaceutical association member lists (LEEM, IFPMA, EFPIA) and the top 100 pharmaceutical companies in terms of drug sales were included. Thirty (out of 78; 38%) non-commercial funders had a data-sharing policy, with eighteen (out of 30; 60%) making data-sharing mandatory and twelve (40%) encouraging data-sharing. Forty-one (out of 100; 41%) commercial funders had a data-sharing policy. Among funders with a data-sharing policy, in a survey of two random samples of 100 registered RCTs, data-sharing statements were present for seventy-seven (77%, 95% CI [67%-84%]) and eighty-one (81% [72%-88%]) of RCTs funded by non-commercial and commercial funders, respectively. Intention to share data was expressed in 12% [7%-20%] and 59% [49%-69%] of RCTs funded by non-commercial and commercial funders, respectively. This survey identified suboptimal performance of funders in setting up data-sharing policies. For those with a data-sharing policy, the implementation of the policy in study registration was limited for commercial funders and of concern for non-commercial funders. The limitations of the present study include its cross-sectional nature, since data-sharing policies are continuously changing. We call for a standardization of policies with a strong evaluation component to make sure that, when in place, these policies are effective.

    View details for DOI 10.1371/journal.pone.0237464

    View details for Web of Science ID 000564315600059

    View details for PubMedID 32817724

    View details for PubMedCentralID PMC7446799

  • Data-sharing recommendations in biomedical journals and randomised controlled trials: an audit of journals following the ICMJE recommendations BMJ OPEN Siebert, M., Gaba, J., Caquelin, L., Gouraud, H., Dupuy, A., Moher, D., Naudet, F. 2020; 10 (5): e038887


    To explore the implementation of the International Committee of Medical Journal Editors (ICMJE) data-sharing policy, which came into force on 1 July 2018, by ICMJE-member journals and by ICMJE-affiliated journals declaring that they follow the ICMJE recommendations. A cross-sectional survey of data-sharing policies in 2018 on journal websites and in data-sharing statements in randomised controlled trials (RCTs). Data sources were the ICMJE website and PubMed/Medline. Included were the ICMJE-member journals and 489 ICMJE-affiliated journals that published an RCT in 2018, had an accessible online website and were not considered predatory journals according to Beall's list, together with 100 RCTs for member journals and 100 RCTs for affiliated journals with a data-sharing policy, submitted after 1 July 2018. The primary outcome for the policies was the existence of a data-sharing policy (explicit data-sharing policy, no data-sharing policy, or a policy merely referring to the ICMJE recommendations) as reported on the journal website, especially in the instructions for authors. For RCTs, our primary outcome was the intention to share individual participant data set out in the data-sharing statement. Eight (out of 14; 57%) member journals had an explicit data-sharing policy on their website (three were more stringent than the ICMJE requirements, one was less demanding and four were compliant), five (35%) additional journals stated that they followed the ICMJE requirements, and one (8%) had no policy online. In RCTs published in these journals, there were data-sharing statements in 98 out of 100, with expressed intention to share individual patient data reaching 77 out of 100 (77%; 95% CI 67% to 85%). One hundred and forty-five (out of 489) ICMJE-affiliated journals (30%; 26% to 34%) had an explicit data-sharing policy on their website (11 were more stringent than the ICMJE requirements, 85 were less demanding and 49 were compliant) and 276 (56%; 52% to 61%) merely referred to the ICMJE requirements. In RCTs published in affiliated journals with an explicit data-sharing policy, data-sharing statements were rare (25%), and expressed intentions to share data were found in 22% (15% to 32%). The implementation of ICMJE data-sharing requirements in online journal policies was suboptimal for ICMJE-member journals and poor for ICMJE-affiliated journals. The implementation of the policy in published RCTs was good in member journals and of concern for affiliated journals. We suggest the conduct of continuous audits of medical journal data-sharing policies in the future. The protocol was registered before the start of the research on the Open Science Framework.

    View details for DOI 10.1136/bmjopen-2020-038887

    View details for Web of Science ID 000738373200104

    View details for PubMedID 32474433

    View details for PubMedCentralID PMC7264700