Researchers compete for positions, grant money, and status. In this competition, researchers can gain an unfair advantage by using questionable research practices (QRPs) that inflate effect sizes and increase the chances of obtaining stunning and statistically significant results. To ensure fair competition that benefits the greater good, it is necessary to detect and discourage the use of QRPs. To this end, I introduce a doping test for science: the replicability index (R-Index), a quantitative measure of research integrity that can be used to evaluate the statistical replicability of a set of studies (e.g., journals, individual researchers' publications). I first discuss existing approaches to the detection of biased results and point out their limitations as measures of scientific integrity. I then show how the R-Index reveals an increase in the use of QRPs by comparing the R-Index of the Journal of Abnormal and Social Psychology in 1960 with the R-Index of the Attitudes and Social Cognition section of the Journal of Personality and Social Psychology in 2011. I then use the R-Index to predict the success rate of empirical replications in the Open Science Collaboration's Reproducibility Project and the Many Labs Project. Like doping tests in sports, the availability of a scientific doping test should deter researchers from engaging in practices that advance their careers at the expense of everybody else. Demonstrating replicability should become an important criterion of research excellence that funding agencies and other stakeholders can use to allocate resources to research that advances science.
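To make the measure concrete, the following is a minimal sketch of how an R-Index could be computed from a set of reported test statistics, assuming the published description of the index: observed power is estimated from each study's z-statistic, inflation is the difference between the success rate (proportion of significant results) and median observed power, and the R-Index is median observed power minus inflation. The function name and input format are illustrative, not the author's implementation.

```python
from math import erf, sqrt
from statistics import median

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def r_index(z_scores, crit: float = 1.96) -> float:
    """Sketch of the R-Index for a set of studies (assumption: as described).

    z_scores : absolute z-statistics of the reported hypothesis tests.
    crit     : two-sided critical value for significance (alpha = .05).
    """
    # Observed power: probability of exceeding the critical value given
    # the observed z (upper tail only; the lower tail is negligible).
    powers = [1.0 - normal_cdf(crit - z) for z in z_scores]
    # Success rate: share of reported results that are significant.
    success_rate = sum(z > crit for z in z_scores) / len(z_scores)
    med_power = median(powers)
    inflation = success_rate - med_power
    # R-Index = median observed power - inflation
    #         = 2 * median observed power - success rate.
    return med_power - inflation
```

On this reading, a literature where every reported result is significant but median observed power is only .55 would get an R-Index of .10, flagging likely inflation, whereas honest reporting of adequately powered studies yields values near the true power.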