Researchers spend months — sometimes years — collecting and cleaning data, writing and debugging computer code, and then running and rerunning their work.
Yet those data and code never enter the peer-review process. No wonder, you might argue, that reproducibility is not the norm in modern science. Over the past decade, some journals have begun to instruct authors to upload their code and data to dedicated online repositories after the acceptance of the paper, so that, in principle, other researchers can download all the necessary resources to redo their analysis. However, such initiatives have been only partially successful in improving transparency.
There are two main reasons. First, the posted code and data are not checked systematically. Their quality, therefore, is sometimes low — particularly because researchers lack time and incentives to prepare them properly. This makes it hard even for specialists to redo the analysis and fully reproduce an original study.
Second, an increasing number of academic papers rely on confidential data relating to individuals; examples include data on income, employment, taxes and health. These are available only to accredited users within a secure computing environment and cannot be shared. In some cases, an anonymised version of the data can be made public, but recent evidence suggests that this approach cannot yet guarantee that privacy is preserved.
A paper recently published in Nature Communications shows that well-trained researchers are sometimes unable to replicate the results of papers published in their field. This is a serious concern and calls for action. One remedy is for journals to verify reproducibility before publication: the journal Biostatistics has been implementing such a verification process for several years, and the American Economic Review recently announced that it is about to do the same.
Many journals, however, lack the time or specialised staff to deal with numerous software and data sources. As an alternative, we advocate an external solution provided by a specialised certification agency, acting as a trusted third party.
To this end, we recently launched cascad, the Certification Agency for Scientific Code and Data, as a non-profit academic initiative. When a researcher requests a reproducibility certificate, a cascad reviewer runs their code on their data to verify that the output corresponds to the results presented in the tables and figures of their manuscript.
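As a rough illustration of what such a verification amounts to, the sketch below checks a set of numbers reported in a manuscript against freshly regenerated output. It is only a minimal sketch under assumptions of our own: the flat results format, the helper names and the tolerance are hypothetical, not part of cascad's actual tooling.

```python
# Minimal sketch of an automated reproducibility check, in the spirit of
# what a cascad reviewer verifies by hand. File format, helper names and
# tolerance are hypothetical illustrations, not cascad's actual workflow.
import csv
import math

def load_numbers(path):
    """Read a flat CSV of named results (columns: name, value) into floats."""
    with open(path, newline="") as fh:
        return {row["name"]: float(row["value"]) for row in csv.DictReader(fh)}

def matches_manuscript(regenerated, reported, rel_tol=1e-6):
    """True if every number reported in the manuscript is reproduced,
    within a small relative tolerance, by the regenerated output."""
    return all(
        name in regenerated
        and math.isclose(regenerated[name], value, rel_tol=rel_tol)
        for name, value in reported.items()
    )
```

In practice the reviewer first reruns the author's scripts to regenerate the output files, then compares every table and figure entry against the manuscript in this fashion.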
The certificate can then be submitted to journals alongside the manuscript, giving the editor and reviewers confidence that the paper is all that it seems. Another key advantage of a trusted third party is its ability to certify the reproducibility of research based on confidential data. The data provider's secure centre creates a virtual machine allowing researchers to remotely access the specific datasets needed for their projects, as well as the required statistical software.
The cascad reproducibility reviewer then accesses a virtual machine that is a clone of the one used by the author (same data, same code), and the whole process is conducted entirely within the secure computing environment. Making research reproducible calls for more joint efforts such as this between academic journals, researchers and data providers. Taking reproducibility seriously is a prerequisite for making science trustworthy and useful to society.
Christophe Hurlin