Disk-Based Parallel Computation, Rubik's Cube, and Checkpointing





Published on Mar 25, 2008

Google Tech Talks
March 24, 2008


This talk takes us on a journey through three varied but interconnected
topics. First, our research lab has engaged in a series of disk-based
computations extending over five years. Disks have traditionally
been used for filesystems, for virtual memory, and for databases.
Disk-based computation opens up an important fourth use: an abstraction
for multiple disks that allows parallel programs to treat them in a
manner similar to RAM. The key observation is that 50 disks have
approximately the same parallel bandwidth as a _single_ RAM subsystem.
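The arithmetic behind that observation can be sketched with illustrative, order-of-magnitude figures (the per-disk and RAM numbers below are assumptions for commodity hardware of that era, not measurements from the talk):

```python
# Assumed order-of-magnitude figures, not measurements from the talk:
PER_DISK_STREAMING_MB_S = 100   # one commodity disk, sequential I/O
RAM_SUBSYSTEM_MB_S = 5000       # a single RAM subsystem
NUM_DISKS = 50

# Fifty disks streaming in parallel reach roughly the bandwidth of
# one RAM subsystem -- provided accesses are sequential, not random.
aggregate_disk_mb_s = NUM_DISKS * PER_DISK_STREAMING_MB_S
print(aggregate_disk_mb_s)  # 5000
```

The catch, as the abstract notes, is that this parity holds only for streaming access; random access leaves a latency gap of several orders of magnitude, which is what the techniques below are designed to hide.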
This leaves latency as the primary concern. A second key is the use
of techniques like delayed duplicate detection to avoid latency. For
example, hash accesses can be buffered (even buffered on disk) until
there are sufficiently many pending accesses to use standard streaming
techniques. We have designed a library for search problems that exploits
the high parallel bandwidth while hiding the latency. We build
abstractions for search that employ parallel disk-based hash arrays
with the same speed as a single hash array in a single RAM subsystem.
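As a toy sketch of delayed duplicate detection (in-memory sorted lists stand in for sorted disk files here, and the function name is illustrative, not from the lab's library):

```python
def dedup_batch(visited_sorted, pending):
    """Toy delayed duplicate detection: instead of probing a hash
    table once per lookup (one random disk access each), buffer the
    pending accesses, sort them, and stream them against the sorted
    visited set in a single sequential merge pass."""
    pending_sorted = sorted(set(pending))
    new_states = []
    i = 0
    for s in pending_sorted:
        # Advance the "disk" cursor sequentially -- never backwards.
        while i < len(visited_sorted) and visited_sorted[i] < s:
            i += 1
        if i == len(visited_sorted) or visited_sorted[i] != s:
            new_states.append(s)  # not a duplicate: genuinely new
    # Folding the new states back in is also a sequential pass.
    return sorted(visited_sorted + new_states), new_states

# Example: states 2 and 6 are new; 3 is a duplicate of a visited state.
merged, new = dedup_batch([1, 3, 5], [2, 3, 3, 6])
```

The point is that every disk access in the batch is sequential, so the aggregate streaming bandwidth of the disks applies instead of their random-access latency.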
In the case of Rubik's cube, we exploited this mechanism by using
seven terabytes of distributed disk in a search problem that showed
that 26 moves suffice to solve Rubik's cube. Our initial efforts
emphasize idempotent operations, so that we can easily recover from
hardware or software faults. We next intend to apply a more general
solution for fault recovery: checkpointing. This separate effort
in our lab has produced a mature, robust user-level checkpointing
package. The package works successfully in tests
on Open MPI, MPICH2, OpenMP, and parallel IPython (used with SciPy and
NumPy). Our DMTCP package transparently checkpoints parallel,
multi-threaded processes, with no modification either to the
operating system or to the application binaries. Extrapolating
from current experiments, we estimate that we can checkpoint a
1,000-node parallel computation in a matter of minutes. We are currently
searching for a testbed on which to demonstrate this scalability.
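DMTCP itself checkpoints unmodified binaries transparently at user level; the sketch below is only a toy, application-level illustration of the checkpoint/restart idea (the file layout and function names are invented for this example, and do not reflect DMTCP's mechanism):

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Write a checkpoint atomically: dump to a temporary file, then
    rename it over the old image, so a crash mid-write can never
    corrupt the last good checkpoint."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def restart(path, initial_state):
    """Resume from the latest checkpoint if one exists,
    otherwise start fresh."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return initial_state

# Toy usage: a loop that survives being killed and re-run, because it
# always resumes from the last checkpointed state.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.img")
state = restart(ckpt, {"i": 0, "total": 0})
while state["i"] < 10:
    state["total"] += state["i"]
    state["i"] += 1
    checkpoint(state, ckpt)
```

The atomic-rename pattern mirrors the reason checkpointing complements idempotent operations: either way, a fault leaves the computation in a state from which re-execution is safe.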

Speaker: Gene Cooperman

