Dr. Thibault studies how to increase the rigour and reproducibility of scientific research. His work focuses on developing and evaluating solutions to shortcomings in the research ecosystem. He completed a PhD in cognitive neuroscience at McGill University in 2019. His doctoral work focused on brain imaging, including neurofeedback, placebos, and suggestion. This work is outlined in his book, Casting Light on the Dark Side of Brain Imaging, co-edited with Dr. Amir Raz. He then worked as a postdoctoral researcher at the University of Bristol before joining the Meta-Research Innovation Center at Stanford (METRICS) in 2021. His publications are available at https://scholar.google.ca/citations?user=zI1x2UYAAAAJ&hl=en&oi=ao
Post-publication critique at top-ranked journals across scientific disciplines: a cross-sectional assessment of policies and practice.
Royal Society Open Science
2022; 9 (8): 220139
Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500-1200 words) and time-to-submit (median 12, IQR 4-26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400-550 words) and time-to-submit (median 4, IQR 4-6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
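The abstract does not state which confidence-interval method was used; as a hedged illustration, the Wilson score interval (a standard choice for binomial proportions) reproduces the reported bounds for the 39/2066 estimate. The function name below is only for this sketch:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    radius = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - radius, center + radius

# 39 of 2066 sampled articles were linked to at least one post-publication critique
lo, hi = wilson_ci(39, 2066)
print(f"{39/2066:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")  # 1.9%, 95% CI [1.4%, 2.6%]
```

The same calculation applied to the other proportions in these abstracts (e.g. 154/237 publicly available articles) also matches the reported intervals, consistent with a score-type interval having been used.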
DOI: 10.1098/rsos.220139
PubMed ID: 36039285
Discrepancy review: a feasibility study of a novel peer review intervention to reduce undisclosed discrepancies between registrations and publications
Royal Society Open Science
2022; 9 (7): 220142
Undisclosed discrepancies often exist between study registrations and their associated publications. Discrepancies can increase risk of bias, and when undisclosed, they disguise this increased risk of bias from readers. To remedy this issue, we developed an intervention called discrepancy review. We provided journals with peer reviewers specifically assigned to check for undisclosed discrepancies between registrations and manuscripts submitted to journals. We performed discrepancy review on 18 manuscripts submitted to Nicotine and Tobacco Research and three manuscripts submitted to the European Journal of Personality. We iteratively refined the discrepancy review process based on feedback from discrepancy reviewers, editors and authors. Authors addressed the majority of discrepancy reviewer comments, and there was no opposition to running a trial from authors, editors or discrepancy reviewers. Outcome measures for a trial of discrepancy review could include the presence of primary or secondary outcome discrepancies, whether publications that are not the primary report from a clinical trial registration are clearly described as such, whether registrations are permanent, and an overarching subjective assessment of the impact of discrepancies in published articles. We found that discrepancy review could feasibly be introduced as a regular practice at some journals interested in this process. A full trial of discrepancy review would be needed to evaluate its impact on reducing undisclosed discrepancies.
DOI: 10.1098/rsos.220142
Web of Science ID: 000885769400006
PubMed ID: 35911195
PubMed Central ID: PMC9326291
- A many-analysts approach to the relation between religiosity and well-being. Religion, Brain & Behavior 2022
- Excess significance and power miscalculations in neurofeedback research. NeuroImage: Clinical 2022: 103008
- Rigour and reproducibility in Canadian research: call for a coordinated approach. FACETS 2022; 7: 18-24
- Citation Patterns Following a Strongly Contradictory Replication Result: Four Case Studies From Psychology. Advances in Methods and Practices in Psychological Science 2021; 4 (3)
Estimating the Prevalence of Transparency and Reproducibility-Related Research Practices in Psychology (2014-2017).
Perspectives on Psychological Science: A Journal of the Association for Psychological Science
Psychologists are navigating an unprecedented period of introspection about the credibility and utility of their discipline. Reform initiatives emphasize the benefits of transparency and reproducibility-related research practices; however, adoption across the psychology literature is unknown. Estimating the prevalence of such practices will help to gauge the collective impact of reform initiatives, track progress over time, and calibrate future efforts. To this end, we manually examined a random sample of 250 psychology articles published between 2014 and 2017. Over half of the articles were publicly available (154/237, 65%, 95% confidence interval [CI] = [59%, 71%]); however, sharing of research materials (26/183; 14%, 95% CI = [10%, 19%]), study protocols (0/188; 0%, 95% CI = [0%, 1%]), raw data (4/188; 2%, 95% CI = [1%, 4%]), and analysis scripts (1/188; 1%, 95% CI = [0%, 1%]) was rare. Preregistration was also uncommon (5/188; 3%, 95% CI = [1%, 5%]). Many articles included a funding disclosure statement (142/228; 62%, 95% CI = [56%, 69%]), but conflict-of-interest statements were less common (88/228; 39%, 95% CI = [32%, 45%]). Replication studies were rare (10/188; 5%, 95% CI = [3%, 8%]), and few studies were included in systematic reviews (21/183; 11%, 95% CI = [8%, 16%]) or meta-analyses (12/183; 7%, 95% CI = [4%, 10%]). Overall, the results suggest that transparency and reproducibility-related research practices were far from routine. These findings establish baseline prevalence estimates against which future progress toward increasing the credibility and utility of psychology research can be compared.
DOI: 10.1177/1745691620979806
PubMed ID: 33682488