Reproducibility crisis in science: causes and possible solutions
Drimer-Batca, Daniel Alexandru
Part I. Claims to knowledge require justification. In science, such justification is made possible by the ability to reproduce or replicate experiments, thereby confirming their validity. Reproducibility also serves as a self-correcting mechanism in science, weeding out faulty experiments. It is therefore essential that experimental studies be replicated and confirmed. Recently, attempts to reproduce studies in several fields have failed, leading to what has been called "a crisis of reproducibility." This crisis is largely a product of the current culture of the scientific world: a system that incentivizes individual success, in the form of publications in high-impact journals, over collaboration and the careful conduct of research. This environment contributes to the crisis of reproducibility by increasing biases, incentivizing statistical manipulation, decreasing quality control and transparency, and increasing the likelihood that researchers will engage in fraudulent behavior. Possible solutions to the problem of irreproducibility could tackle these factors individually; a more prudent approach would be to focus on changing the current culture of the scientific world. Increased transparency has been suggested as a way to solve this problem, and there is currently a movement advocating for it through "open science."

Part II. Retraction of scientific papers due to evidence of research misconduct is on the rise, having increased tenfold from 2000 to 2009. Previous work on this topic focused on published retraction notices, using them to estimate the percentage of retracted articles attributable to research misconduct. This study took a different approach. Using the Office of Research Integrity database, we first identified publications that resulted from research misconduct. We then searched those articles to determine whether they had in fact been retracted.
Once retraction notices were identified, they were scored on a set of elements reflecting guidelines for transparency. Lastly, we investigated whether the quality of a retraction notice correlates with journal impact factor. Our findings suggest that 21% of papers containing data derived from scientific misconduct are not retracted. Moreover, the quality of retraction notices varies, with some elements more likely to be present than others. No significant correlation between retraction notice quality and journal impact factor was found.