OK, so today I'm going to be talking about Gaffer, which is both an open source initiative and something we use in production at Image Engine. We've written a full paper on this with a lot more detail than I could cover in a single talk, so please give the paper a read when you have time. It's on the ACM Digital Library if you follow this link.

So what's Gaffer anyway? Simply put, it's a modular framework for building node-based tools and workflows. It provides a multi-threaded computation engine which can be used in many domains, builds on several best-in-class open source libraries, and provides a front end for artists and TDs to interact with those libraries.

We don't have time for a whole company reel, so hopefully this clip gives you an idea of what a studio can accomplish using Gaffer. This is a full CG jungle made for Jurassic World, created by the artists and TDs at Image Engine using Gaffer. The 3D scene and shaders were assembled as a Gaffer graph and streamed to 3Delight using the GafferRenderMan module. The generic plant motion was simulated in Houdini and baked to a library of caches, and those caches were instanced into the render scene using a mix of open source and proprietary Gaffer instancing tools. The volumes were simulated in Houdini as well, baked to VDB, and rendered via Gaffer using the same workflow as the geo. Even the animation preview here is rendered using the Gaffer pipeline.

Whenever we give presentations there's always a bit of confusion about what's open source and what isn't, so I try to use icons in my presentations to clarify. If you're ever wondering, you can look in the upper right corner: there's either going to be the Gaffer "G" logo, which means that all tools and concepts on the slide are completely open source, or you're going to see the Image Engine prawn, which means that something on the slide is proprietary. Some of those slides might still contain open source tools; it's just that something there is proprietary.

So here's an overview of the talk. I'm going to alternate between open source and Image Engine sections.

Let's start with a little history. Many of the tools that we needed to make at Image Engine seemed to be composed of node graphs in one form or another, so it seemed handy to have a general purpose framework for making those sorts of tools. John Haddon had been developing a framework like that privately, and he agreed to open source it if Image Engine was willing to contribute back. It was a no-brainer for IE really: we jumped several man-years ahead in development without much cost. Management was already familiar with contributing to open source software; we'd been contributing to Cortex for years and seen those benefits materialize in production. At the time we didn't know Gaffer would become so prevalent in our pipeline, but you never do really.

Outside of Image Engine, Gaffer is an open source initiative, and the goals are a bit different. It's a bit of an experiment to see what would happen if we could squeeze all the best bits of open source into one application. A key goal of the open source initiative is to have a polished standalone app for lighting and rendering. Currently the development of that app is kind of piggybacking on Image Engine's need for the framework.

So here's a quick look at the status of the standalone app. Everything shown here is open source, including the shaders, which are OSL shaders provided by Appleseed. We even open-sourced the robot character; he comes with Gaffer. You can download him with the binaries, and he's featured in the online tutorial.
In the paper we introduce quite a few terms, which is kind of overwhelming, so we added a glossary at the end. All the modules on the left here are open source; they ship with Gaffer out of the box. On the right we have three key technologies which are proprietary to Image Engine. I'll discuss those a bit later.

Now I'm going to give a quick whirlwind explanation of how Gaffer works. It's described a lot better in the paper, there are also web videos from past presentations, and of course all the code is on GitHub if you want to see the guts.

Pretty much everything in Gaffer is a node or a plug. In this example we're just using Gaffer to visualize some information: the nodes don't compute anything, and the connections don't imply data transfer. It's just a convenient way to visualize the line of succession to the British throne. Nodes can be implemented in C++ or Python, but Python's terrible for anything performance critical, so in practice we tend to implement everything in C++ and use Python for UI and glue code.

Beyond basic nodes, we have compute nodes. These define three main methods which allow the underlying computation engine to evaluate data in the graph. Each compute node has to define which inputs affect each output, how to compute each output, and a quick hash so we can store results in memory and refetch them to avoid redundant computes. Both the hash and the compute are performed relative to a unique context.

Contexts are the key to parallel processing in Gaffer. Each thread can make its own context, so even though all threads are accessing the same graph, they can be computing different results for different points in time. Beyond time, contexts also provide completely arbitrary keys that can change, and we exploit that to enable multi-threaded image and scene processing, which I'll explain in a bit.

A quick note about multi-threading: it's the observer of the graph that's responsible for doing the threading. The viewer, the renderer backend, or your own query function would be the thing that spawns the threads; the nodes themselves just have to guarantee thread safety. They can do their own threading internally if the algorithm warrants it, which you might do for a geometry deformer, say. But typically we want to execute entire graphs in parallel rather than chunks of work in a single compute.

So how does Gaffer perform multi-threaded image processing? In Gaffer, an image is just a compound plug which contains a child plug for each aspect of the image, and computation of each child occurs separately. Computing the first four here is pretty easy: most of the time they're just loaded out of the file and passed through the graph unchanged. But we don't want to compute the channel data, the pixels, for the whole image at once. We might only want to see part of the image, and we definitely want to multi-thread it. So we split the channel data into tiles and compute them individually. Each tile can be computed on a separate thread, and we use the context to identify which tile is being requested. Typically you fire up a bunch of threads and compute tiles until you have everything you wanted, and that's exactly what the viewer does.
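To make that a little more concrete, here's a rough sketch of pulling a single tile from an image node in Python, with the tile identified by context variables. The context variable names (`image:channelName`, `image:tileOrigin`), the file path and the exact value types are from memory and purely illustrative, so check them against your Gaffer version rather than taking this as gospel.

```python
import imath

import Gaffer
import GafferImage

script = Gaffer.ScriptNode()
script["reader"] = GafferImage.ImageReader()
script["reader"]["fileName"].setValue( "/path/to/plate.exr" )  # hypothetical path

# Format, metadata and so on are single values, computed once for the image.
fmt = script["reader"]["out"]["format"].getValue()

# Channel data is computed one tile at a time. The tile and channel being
# requested are identified by variables in the current context, so each
# thread can pull a different tile from the same plug in parallel.
with Gaffer.Context( script.context() ) as context :
	context["image:channelName"] = "R"
	context["image:tileOrigin"] = imath.V2i( 0, 0 )
	tile = script["reader"]["out"]["channelData"].getValue()
```

The viewer is doing essentially this, just with a pool of threads each asking for a different tile.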
And we do the exact same thing for 3D scenes. We have a scene plug with a child for each thing we might want to compute individually, and we pull on the child plug we want using a context which specifies the location in the scene that we're interested in. This is what allows us to access something deep in the scene without first querying its parents, and it lets us access different parts of the scene on different threads.

To clarify, scene processing is all provided by the GafferScene module rather than by the core computation engine, but most of the hard work is done by the core. Each scene node outputs an entire 3D scene, and those scenes are completely independent: between each pair of nodes there's a completely independent 3D scene. Generation of any of those scenes can be multi-threaded, but it's up to the observer, each UI panel, to spawn the threads and traverse the scene itself. In the bottom right you can see the hierarchy of the scene output by the selected node, and the properties of the scene as well.

Viewing scenes is all well and good, but you probably also want to generate some pictures and store them somewhere on your file system. This is where the GafferDispatch module comes in. It provides the concepts of tasks and execution dependencies, which are again unique per context. Here's an example with an Appleseed render and two wedge nodes. The wedge nodes modify the context of the render, so one submission of this graph generates 25 unique images. Gaffer ships with dispatchers for execution on your local machine or on a render farm using Tractor. It's worth noting that the GafferDispatch module has no dependency on scenes or images; you can use it to dispatch arbitrary shell commands or Python commands straight to Tractor.
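Here's a very rough sketch of that kind of standalone dispatch from Python. The node and plug names are as I remember them and have shifted a little between Gaffer versions, and the jobs directory is just a hypothetical location, so treat this as an illustration of the idea rather than copy-paste code.

```python
import Gaffer
import GafferDispatch

script = Gaffer.ScriptNode()

# A task node that just runs a shell command; it executes once per
# frame in the dispatched range.
script["echo"] = GafferDispatch.SystemCommand()
script["echo"]["command"].setValue( "echo hello from gaffer" )

# Dispatch locally; a TractorDispatcher could send the same tasks to the farm.
dispatcher = GafferDispatch.LocalDispatcher()
dispatcher["jobsDirectory"].setValue( "/tmp/gafferJobs" )  # hypothetical location
dispatcher["framesMode"].setValue( GafferDispatch.Dispatcher.FramesMode.CustomRange )
dispatcher["frameRange"].setValue( "1-5" )
dispatcher.dispatch( [ script["echo"] ] )
```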
Now I'm going to go through some production workflows at Image Engine, so we can see how the Gaffer frameworks are integrated into the studio pipeline. First it's important to understand the type of work that Image Engine takes on. We're primarily a VFX studio with a reputation for creature and character work, but the size and scope of our projects varies drastically in any given year. We also do high-end TV like Game of Thrones, we do the occasional commercial, and more recently we've just finished our first foray into feature animation, delivering a 20 minute sequence on Final Fantasy XV. We don't have the development resources of a huge company, so we needed a unified pipeline that's flexible enough to take on that variety of work.

Ubuka, our asset management system, provides a big part of that flexibility. Ubuka is built on top of Gaffer and uses Gaffer graphs internally for a number of operations. We presented the basics of Ubuka at a previous DigiPro in 2013, so I'll focus on what's new since then. In this paper we introduce the concepts of bundles, workflow templates and bundle render profiles. Bundles are department-specific groupings of asset components that belong in each shot. CG supervisors build workflow template graphs, which use bundles to define the relationships between departments on their show. This image shows a default workflow for a VFX film, highlighting the inputs to lighting. It's important to note that no data is being computed here; this is merely an informational template that Ubuka queries from time to time. When an artist publishes an output bundle, all the downstream input bundles become out of date until other artists choose to pull those new components into their bundles. Workflow templates can be configured per show, but also per sequence or shot.

Here we're seeing a more complex template, which is fetching a sequence-level light rig and effects elements and adding them into the shot-level bundles. It also has a custom effects bundle which isn't used by any artist; it's just used to collate sequence- and shot-level effects together for QC purposes.

Here we see the creature effects input bundle being loaded in Maya. There are status icons to indicate out-of-date inputs and other information for the artist. All these icons are generated by querying the workflow template graphs as well as what's loaded in the scene. Nearly every shot department at Image Engine works in a similar way using bundles. When artists are ready to publish their work, they're presented with a UI to define what belongs in their output bundle. They then select a bundle render profile, which is used for the QC render. As you select different profiles you expose different settings, giving fine control over the QC process.

Here we're seeing the primary creature effects BRP used on Final Fantasy XV. On the right we see the exposed settings presented to the user at submission time, and diving inside we see what it's doing in detail. We're pulling in the entire 3D scene as assembled by Ubuka and assigning a bunch of look dev. We have three output render passes: a regular beauty pass, a hair guide curve pass with a live slap comp over the beauty, and a cloth intersection pass. We also have three dailies, one for each render pass. Looking closer at the cloth intersection pass, we see a bunch of different colors assigned to different bits of cloth for debug purposes.

Here we see the results of that submission: the various dailies uploaded to Shotgun. On the top left is the beauty render from that BRP, on the top right is the cloth pass, and on the bottom is the final comp. Approving any of the BRP dailies in Shotgun triggers an event that approves the associated bundle in Ubuka, and vice versa. This approval is what triggers the out-of-date flags that the other departments see on their input bundles.

So far we've seen a few different uses of Gaffer in production, but I suppose you're probably wondering how we made all that look dev I glossed over. That's where Caribou comes in. It provides deep integration between Gaffer and DCC applications like Maya. Caribou represents a sweet spot between traditional modeling and animation applications and procedural scene generation applications. It allows us to tackle fast turnaround projects using the same pipeline and workflows as our huge VFX and feature animation shows, all within applications familiar to our artists.

Look development works in Caribou to build and assign shader networks procedurally. Here we're using a custom-built layered material and a library of RSL co-shaders, but Gaffer would let you do the same thing with OSL or Arnold shaders as well. On the left we see the final look dev graph for a hero alien from Independence Day: Resurgence. Look development publishes this network as a box, shown in the bottom right, and exposes control via promoted plugs, shown in the top right. The primary controls exposed here are LOD, state and render quality (specifically SSS quality), and less prominent controls are exposed on subsequent tabs for displacement, blocking passes and so on. Here's a turntable of that alien in the split state using a studio light rig. The look dev team also handles hair shading in a similar way.
The groom is exported from Yeti in the rest pose, then loaded and deformed in Caribou using Grizzly. Grizzly is a plugin we built on top of GafferScene which allows you to define custom nodes that procedurally deform existing geometry. Gaffer also provides some simpler deformation tools out of the box, using OSL as a deformer language.

Now that the look dev is boxed up and published to Ubuka, the lighting team loads references to it. They can alter the look per shot via the exposed plugs or via downstream overrides. Lighting can also create templates, which are starter graphs that are automatically generated when beginning new shots. These typically branch the scene into separate render passes based on content, with some custom settings per branch. This image is showing the template from Game of Thrones, where we've got separate branches for beauty, reflection and shadow passes, and a live slap comp over the plate. Lighters can load sequence-level light rigs or create new lights in shot. They can manipulate the lights in Maya or in Caribou. They can create and deform new geometry on the fly using the standard Maya sculpting tools, and they can tweak those things at the last second using the full suite of Maya tools. Here I'm running an IPR and deforming the laser beam geometry in Maya, with the Caribou IPR updating on the fly. This is the sort of feature that we think is very useful for fast turnaround shows and episodic TV.

For Final Fantasy XV, the lighting team decided to take advantage of the automation tools in Ubuka. Production tagged key shots and child shots throughout the sequences. Lighters built setups for each key shot and auto-rendered the child shots by hijacking the QC system. They built bundle render profiles containing key shot light rigs, look dev, render passes, slap comps and even bundle publisher nodes themselves. Whenever a key shot was approved, they could auto-generate publish renders for the child shots right along with the QC and approval dailies, with everything prepackaged and ready to go for the comp team as soon as the QC dailies were approved.

Here's the QC output of the lighting BRP for one shot in a big fight sequence. All the renders are going from Gaffer into 3Delight directly, but what we're seeing here is actually a deep comp generated by Nuke. This was launched from a task node within the BRP itself, to load Nuke in the background and dynamically layer the passes for QC. And here's the final comp, with all renders pulled in from the lighting BRP. It's obviously been altered by comp quite a bit, incorporating 2D elements and rebalancing the look.

Now I'm going to put my open source hat back on and acknowledge some of the contributors around the world. John Haddon is the originator of Gaffer, and he remains the principal architect today. Image Engine is committed to Gaffer development and actively contributes to the project on a daily basis. Hugh Macdonald at Nvizible has made significant contributions to the Gaffer image module, and he's currently working to extend image plugs to support both flat and deep data simultaneously. Esteban Tovagliari of appleseedhq has contributed a complete integration of Appleseed, which is a physically based open source renderer with full OSL support. Thanks to his hard work, users can download Gaffer and start rendering straight away. And most recently, Cinesite has been thoroughly evaluating Gaffer for use at their London and Montreal facilities.
It's worth noting that Cinesite and Image Engine are partner companies, so they have access to Caribou as well, which might sweeten the pot for them.

So what's next for the open source project? There are still some key UI features missing: manipulators, a keyframe editor, light linking. It'd be good to show off some other open source tech that we haven't integrated yet, like OpenSubdiv, OpenVDB and USD. Supporting more renderers would always be great; I've listed some of the obvious choices up there. And maybe cloud dispatching to Amazon or Google, that'd be pretty cool to have.

Now I'm gonna do a little sneak peek at what's next at IE. It's important to note that these things are in use on current productions but they're not in the paper; we'll write another one if there's interest and if they work out in production. SceneNodes compute 3D scenes and ImageNodes compute 2D images, but these green nodes are Ubuka entity nodes, and they compute database queries. The data flowing through the green plugs is a Ubuka URL which represents the result of those queries. You can view the results in the Ubuka Inspector at the bottom right of the video. Here we're loading an animation output bundle, querying the contents, filtering the URLs by type, purpose and name, and then dynamically converting those URLs into scenes and images. We expect these nodes to enable artists and TDs to create new automation tools and to facilitate new multi-shot workflows.

Let's take a closer look at the scene builder. The input is a list of Ubuka URLs and the output is a fully constructed 3D scene. To understand what's going on we need to dive inside. The guts of the scene builder is a GafferScene Loop, which is an open source node that loops over its input; here we're using it to loop over the input URLs, pipe each one into a geometry reader, and parent the result back into the main scene. Inside the geometry reader we actually have a complex network as well, to allow for the way we choose to store volumes and particles at Image Engine, so we need extra loops to account for those geometry variations.

So now hopefully we still have some time for questions and answers. I'm gonna start by asking myself a few common questions that I've heard on the dev list quite often, or in the pub.

What can Gaffer do for me? Can it be a standalone lighting tool? Probably not quite yet. It's missing those key UI features I was mentioning, and it's probably worth noting that those need to be a bit of an open source effort, because Image Engine has Caribou for that, so it's not yet a production priority for us. Can it be a standalone comp tool? Well, maybe for automated processes like dailies, slates and outputs, but we don't really ever intend it to be a pro compositing tool.

So how can I use Gaffer today? Well, you can use it as a framework for any node-based development you've been considering, but maybe more practically and more immediately, it can be a TD tool for managing task submissions to your render farm: you can download Gaffer, build complex shell commands and send them to Tractor. Is it hard to use? Well, you can just download the binaries rather than compiling from scratch; it's gonna be a lot easier. We have a quick start tutorial which takes you start to finish through look dev and rendering in Appleseed with a production-level character and shader setup.

Can you make custom nodes? Yeah, of course, but you might want to consider making them as a plugin first, so you can just download the binaries and link to them. We ship the headers; you don't need to compile Gaffer to make a plugin.
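To give a flavour of what a custom node involves, here's a minimal ComputeNode sketched in Python, following the affects/hash/compute pattern described earlier. Treat it as a sketch from memory rather than a definitive recipe: the exact signatures, plug flags and registration details are best checked against the Gaffer source and examples.

```python
import IECore
import Gaffer

class Doubler( Gaffer.ComputeNode ) :

	def __init__( self, name = "Doubler" ) :

		Gaffer.ComputeNode.__init__( self, name )
		self["in"] = Gaffer.IntPlug()
		self["out"] = Gaffer.IntPlug( direction = Gaffer.Plug.Direction.Out )

	# Declare which outputs depend on which inputs, so the engine
	# knows what to dirty when something upstream changes.
	def affects( self, input ) :

		outputs = Gaffer.ComputeNode.affects( self, input )
		if input == self["in"] :
			outputs.append( self["out"] )
		return outputs

	# A cheap hash identifying the result, so cached values can be
	# reused instead of recomputed.
	def hash( self, output, context, h ) :

		Gaffer.ComputeNode.hash( self, output, context, h )
		if output == self["out"] :
			self["in"].hash( h )

	# The actual work, always performed relative to the current context.
	def compute( self, plug, context ) :

		if plug == self["out"] :
			plug.setValue( self["in"].getValue() * 2 )

IECore.registerRunTimeTyped( Doubler )
```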
Can I change the guts? Then you do need to fork it on GitHub and compile away, but we provide pre-compiled dependencies, so you'll have an easier time if you download those and only recompile Gaffer itself. And yeah, you can ask us questions on the dev list; we're on there every day. That's it. Thank you.