Parallel Python: Analyzing Large Datasets (Intermediate) | SciPy 2016 Tutorial | Matthew Rocklin & Min Ragan-Kelley




Published on Jul 17, 2016

Students will walk away with a high-level understanding of both parallel problems and how to reason about parallel computing frameworks. They will also gain hands-on experience with a variety of frameworks easily accessible from Python.

For the first half, we will cover basic ideas and common patterns encountered when analyzing large datasets in parallel. We start by diving into a sequence of examples that require increasingly complex tools. Starting from the most basic parallel API, map, we will move on to general asynchronous programming with Futures, high-level APIs for large datasets such as Spark RDDs and Dask collections, and streaming patterns. For the second half, we focus on traits of particular parallel frameworks, including strategies for picking the right tool for your job. We will finish with some common challenges in parallel analysis, such as debugging parallel code when it goes wrong, as well as deployment and setup strategies.
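As a taste of the first topic, the most basic parallel API, map, can be sketched with the standard-library `concurrent.futures` module (one of the tools covered below). The function name `slow_increment` is a hypothetical stand-in for any expensive, independent computation:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_increment(x):
    # Stand-in for an expensive, independent computation.
    return x + 1

# executor.map applies the function to each input, spreading the calls
# across a pool of workers; results come back in input order.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(slow_increment, range(8)))

print(results)  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The appeal of map is that it mirrors the builtin `map`: if your problem is a loop of independent calls, parallelizing it is often a one-line change.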

Part one: We dive into common problems with a variety of tools.

1. Parallel Map
2. Asynchronous Futures
3. High Level Datasets
4. Streaming
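When tasks are not uniform, the Futures interface (topic 2) is more flexible than map: you submit work eagerly and handle results as they finish. A minimal sketch with `concurrent.futures`, where `square` is a hypothetical task function:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as executor:
    # submit returns a Future immediately; the work runs in the background.
    futures = [executor.submit(square, i) for i in range(5)]
    # as_completed yields futures in completion order, not submission order,
    # so slow tasks don't block the handling of fast ones.
    finished = sorted(f.result() for f in as_completed(futures))

print(finished)  # [0, 1, 4, 9, 16]
```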

Part two: We analyze common traits of parallel computing systems.

1. Processes and Threads. The GIL, inter-worker communication, and contention
2. Latency and overhead. Batching, profiling.
3. Communication mechanisms. Sockets, MPI, Disk, IPC.
4. Stuff that gets in the way. Serialization, Native v. JVM, Setup, Resource Managers, Sample Configurations
5. Debugging async and parallel code / Historical perspective
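To make point 2 concrete: when individual tasks are cheap, per-task scheduling overhead can dominate, and batching amortizes it. A sketch (assuming Python 3.8+ for the `:=` operator; `process_batch` is a hypothetical per-batch function):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def batched(iterable, size):
    # Group items into lists of `size` to amortize per-task overhead.
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def process_batch(batch):
    return [x * 2 for x in batch]

with ThreadPoolExecutor(max_workers=4) as executor:
    # 1000 tiny items become 10 tasks instead of 1000,
    # cutting scheduling and communication overhead 100x.
    batch_results = executor.map(process_batch, batched(range(1000), 100))
    results = [x for batch in batch_results for x in batch]

print(results[:5])  # [0, 2, 4, 6, 8]
```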

We intend to cover the following tools: concurrent.futures, multiprocessing/threading, joblib, IPython parallel, Dask, Spark
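Many of these tools share the same map-style interface, which is part of what makes switching between them cheap. For example, `multiprocessing` exposes a `Pool` whose API mirrors the builtin `map`; the sketch below uses the thread-backed `multiprocessing.dummy` variant so it runs anywhere, but swapping in `multiprocessing.Pool` would run the same code in separate processes, bypassing the GIL (`negate` is a hypothetical task function):

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

def negate(x):
    return -x

# pool.map has the same shape as the builtin map; results come back in order.
with Pool(4) as pool:
    negated = pool.map(negate, range(5))

print(negated)  # [0, -1, -2, -3, -4]
```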

