Andrew Davison - Reproducible Research Workshop 2011

Published on Oct 18, 2011

Andrew Davison speaks on "Automated tracking of scientific computations" at the Applied Mathematics Perspectives workshop on "Reproducible Research: Tools and Strategies for Scientific Computing" in Vancouver, July 14, 2011.

Workshop website, including slides: http://stodden.net/AMP2011

Talk Abstract: "Reproducibility of experiments is one of the foundation stones of science. A related concept is provenance: being able to track a given scientific result, such as a figure in an article, back through all the analysis steps (verifying the correctness of each) to the original raw data and the experimental protocol used to obtain it. In computational, simulation-based or numerical-analysis-based science, reproducing previous experiments and establishing the provenance of results ought to be easy, given that computers are deterministic and do not suffer from the inter-subject and trial-to-trial variability that makes reproduction of, for example, biological experiments more challenging. In general, however, it is not easy, because of the complexity of our code and our computing environments, and the difficulty of capturing every essential piece of information needed to reproduce a computational experiment using existing tools such as spreadsheets, version control systems and paper notebooks.

To ensure reproducibility of a computational experiment we need to record: (i) the code that was run, (ii) any parameter files and command-line options, (iii) the platform on which the code was run, and (iv) the outputs. To keep track of a research project with many hundreds or thousands of simulations and/or analyses, it is also useful to record (i) the reason for which the simulation/analysis was run and (ii) a summary of its outcome. Recording the code might mean storing a copy of the executable, or of the source code (including that of any libraries used), the compiler used (including its version) and the compilation procedure (e.g. the Makefile). For interpreted code, it might mean recording the version of the interpreter (and any options used in compiling it) as well as storing a copy of the main script and of any external modules or packages imported by the script. For projects using version control, "storing a copy of the code" may be replaced with "recording the URL of the repository and the revision number". The platform includes the processor architecture(s), the operating system(s), the number of processors (for distributed simulations), and so on.
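
A minimal Python sketch of capturing the kinds of information listed above, assuming a Git working copy and a hypothetical capture_provenance helper; this is an illustration of the idea, not the Sumatra API.

    # Hypothetical sketch (not the Sumatra API): gathering the kinds of
    # provenance information listed above before launching a computation.
    import json
    import platform
    import subprocess
    import sys
    from datetime import datetime, timezone

    def capture_provenance(script, parameter_file, reason=""):
        """Collect code version, parameters, platform and context for one run."""
        # (i) the code that was run: here, the revision of a Git working copy
        revision = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
        repository = subprocess.check_output(
            ["git", "config", "--get", "remote.origin.url"], text=True).strip()
        return {
            "script": script,                 # main script (interpreted code)
            "interpreter": sys.version,       # version of the interpreter
            "repository": repository,         # URL of the repository ...
            "revision": revision,              # ... and the revision number
            "parameters": parameter_file,     # (ii) parameter files / options
            "platform": {                      # (iii) the platform
                "machine": platform.machine(),
                "system": platform.system(),
                "release": platform.release(),
            },
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reason": reason,                  # why the run was performed
        }

    if __name__ == "__main__":
        record = capture_provenance("main.py", "default.param",
                                    reason="test effect of new integration step")
        print(json.dumps(record, indent=2))
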

In developing a tool for tracking simulation experiments/computational analyses, something like an electronic lab notebook for computational science, there are a number of challenges: (i) different researchers have very different ways of working and different workflows: command line, GUI, batch jobs (e.g. in supercomputer environments), or any combination of these for different components (simulation, analysis, graphing, etc.) and phases of a project; (ii) some projects are essentially solo endeavours, while others are collaborative and possibly geographically distributed; (iii) as much as possible should be recorded automatically, because if it is left to the researcher to record critical details, there is a risk that some will be missed or left out, particularly under pressure of deadlines.

In this talk I will present the solution we are developing to the challenges outlined above. Sumatra consists of a core library, implemented in Python, on which are built a command-line interface for launching simulations/analyses with automated recording of provenance information, and a web interface for managing a computational project: browsing, viewing, and annotating simulations/analyses.

Sumatra (i) interacts with version control systems such as Subversion, Git, Mercurial, and Bazaar; (ii) supports launching serial or distributed (via MPI) computations; (iii) links to data generated by the computation; (iv) aims to support any command-line-drivable simulation or analysis program; (v) supports both local and networked storage of information; (vi) aims to be extensible, so that components can easily be added for new version control systems, etc.; and (vii) aims to be very easy to use, otherwise it will only be used by the very conscientious."
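
As a rough sketch of the workflow the abstract describes, the snippet below drives the smt command-line tool from Python via subprocess. The subcommand names (smt init, smt run, smt list, smtweb) follow Sumatra's documentation, but the project name, parameter file and option values shown are assumptions for illustration, not a verified recipe.

    # Hypothetical sketch: driving the Sumatra command-line interface from Python.
    # Subcommand names follow the Sumatra documentation; the project name,
    # parameter file and option values are assumptions for illustration only.
    import subprocess

    def smt(*args):
        """Run one 'smt' subcommand and raise if it exits with an error."""
        subprocess.run(["smt", *args], check=True)

    # Create a Sumatra project in the current working copy (the version
    # control system in use is detected automatically).
    smt("init", "MyProject")

    # Launch a simulation/analysis; code version, parameters, platform and
    # output data are recorded automatically, along with the reason given here.
    smt("run", "--reason", "sanity check after refactoring", "default.param")

    # Review past runs from the command line ...
    smt("list", "--long")

    # ... or browse, view and annotate them in the web interface.
    subprocess.run(["smtweb"], check=True)
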
