Welcome everybody to the PDG in the Pipeline webinar, a Lume webinar with Jeff Wagner, Ken Xu, and some special guests later on in the webinar. Looking forward to that. We're going to start with a high-level overview; Jeff is going to take it there. Save up your questions, or you can put your questions into the queue along the way, but we'll be answering them at a certain point when Jeff has done that segment. And then we'll also have Ken and his dev team answering specifically technical questions after Jeff is done as well. So without further ado, over to Jeff.

Yep, thank you. I'm sure that a lot of you have already seen a lot of the presentation videos on PDG, and you're quite well aware of it; you might have gone through some of the PDG tutorials, or not. But what we're going to cover today is why we did PDG, what problems exist in pipelines, and how we designed PDG to address a lot of these issues. And then afterwards, we'll hand off the baton to Ken and his team and answer any and all questions related to PDG. So let's go. It all comes back to: what is a pipeline? And a pipeline has been many things to many people. Are pipelines data-centric? Are they approval-centric? They are that and a lot more. I've been to a lot of facilities, and I've seen lots and lots of pipeline diagrams. And I'm not going to show a single pipeline diagram, because that's not the real key issue with regards to pipelines. The real issue is facilities and users endlessly redesigning their pipelines for really small incremental gains over the last few years. There's been a lot of keying on data formats, data formats, data formats, and approvals. So nothing has really changed in a revolutionary way in the last number of years. You could say Alembic as a data format might have helped a lot. But in essence, pipelines really haven't matured that much over the last few years.
So pipelines apply to all facets of digital content creation. We're talking about all of the digital content creation tools in film, games, advertising, and VR and AR. And we all know that pipelines now are stressed. We hear about crunch all the time. And crunch means users making up for mismanagement within pipelines, or requirements that have gone well beyond the original design intent. As we know, a lot of pipelines are designed to be linear; there's no proceduralism in them whatsoever. So when you need more assets, you need more people, or you need more time. It's either one, and it ends up being all of it. We also see data size requirements growing dramatically. Layout and environments are heavily stressed, and that goes for all departments. Larger teams are required to fill all these asset needs. Quality control is very, very challenging in current pipelines. And changes, many times, will push deadlines and require a lot of rework. So the big problem that we have is cache-based pipelines. It's both a pro and a con, as we'll take a look at in a bit. Departments pass cached data down the pipe, and it's always driven by approvals. Departments try to be self-sufficient because they're basically in survival mode, so their form of communication with the rest of the pipeline is cached data and a ton of metadata. Within these departments, they'll use dedicated skills and tools, and those skills and tools are built around the very specific tasks that they're trying to do. Now, in pipelines that are being designed right now, we're starting to see a demand for these tools going beyond the current department that they're being used in. For instance, using some sort of visual effects tool inside of lighting, let's say, to do in-camera placement. Or character effects work now having to merge in with actual effects shots. These are the sorts of collapsing pipelines that we're seeing all around us.
So department-based pipeline designs are starting to be really stressed now. If you're in dailies and you've got a character effect, with cloth blowing against wind that's coming from a pyro simulation, and you have both the effects department and the character effects department in the same dailies room, is it gonna break out into a fist fight? Who's gonna deal with which issues? So pipelines are definitely being stressed. Feedback within departments is very strong, but the main language between departments in a facility is cached assets with tons of metadata, more and more metadata, and it's worked for years. So the pros are: it's a proven workflow, and there are many in-house and third-party tools to support this way of working. The most important thing, and one thing you must never forget in pipelines, is approvals: is this a good asset or is it not? Does it meet the standards to move forward? Once it does, you lock the asset down and you pass it on. This simple pass/reject test for assets, with lock and version, works very well. It's interesting to see how PDG actually has this built right inside of its genetics, and we'll take a look at that a bit later. Once these assets are approved, they're good to propagate to other parts of the pipeline. The cons are: it scales linearly. There are no smarts beyond the current department. So even if the department itself is procedural, the cached data formats really have no procedural smarts to them whatsoever; generally it's cached out as Alembics or as VDBs or some other format. So you throw resources at it to meet deadlines. And rework is the big problem with proceduralism as well: if proceduralism is stuck in the department, reworking still has to be done in that department. And the large amount of disk space required to hold these approved caches is growing exponentially.
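The pass/reject, lock-and-version cycle described above can be sketched in a few lines of Python. This is a hypothetical illustration, not SideFX or studio code; the `AssetRegistry` class and its method names are invented for the example.

```python
# Minimal sketch of a cache-based approval gate: each submitted asset version
# is either approved (locked down) or rejected back to its department.
# All names here are hypothetical, for illustration only.

class AssetRegistry:
    def __init__(self):
        self._versions = {}   # asset name -> list of submitted payloads
        self._approved = {}   # asset name -> locked (approved) version number

    def submit(self, name, payload):
        """A department submits a new candidate version of a cached asset."""
        self._versions.setdefault(name, []).append(payload)
        return len(self._versions[name])  # 1-based version number

    def review(self, name, version, passes):
        """The simple pass/reject test: on pass, lock the version down."""
        if passes:
            self._approved[name] = version
        return passes

    def approved_version(self, name):
        """Downstream departments only ever see the locked version."""
        return self._approved.get(name)


registry = AssetRegistry()
v1 = registry.submit("smoke_fx", "smoke_v001.vdb")
registry.review("smoke_fx", v1, passes=False)   # rejected: rework needed
v2 = registry.submit("smoke_fx", "smoke_v002.vdb")
registry.review("smoke_fx", v2, passes=True)    # approved: locked and passed on
```

The key property is the one named in the talk: downstream departments only ever consume a locked, approved version, never a work-in-progress one.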
And you need to keep these caches around, especially if you have approvals happening very late in your film or game, depending on how it goes. So the trick is to get smart pipelines within departments. Current departments are smart within their own ecosystem, but dumb between departments. I've visited many, many different pipelines, have gotten different pipeline groups together, and it's always interesting to see how brief the communication is between the various departments in a facility. The departments become entrenched, and they say, well, these are my assets, they're approved; once they're out of my hands, I don't wanna see them anymore until there's a change. And if there's a change, well, then you're gonna have to schedule that, and we're gonna have to fit it in with what we're doing. So it's very difficult and challenging once these assets live outside of the departments. It's getting better, but the sheer amount of data and the stress on the global pipeline are big issues. Now, when we're dealing with pipelines, your first knee-jerk reaction is: oh, pipelines are data-centric. And that's one thing that we saw at SideFX when we were looking at how we represent data and how it works. We saw a lot of pipelines that were fixated on the formats of the caches, because remember, you're living and dying by these caches in a lot of pipelines. So Alembic, the current industry-standard scene geometry format, has worked fairly well. It is purely a cache format, and a good one. It pushes the usability of cached data in pipelines as far as you can possibly go. It supports deferred loading of cached data, which is a really good property to have; that means you're only pulling the data off disk when you need it. The loading and unloading of this data is threaded, so it gives you threaded evaluation. And it's also very well designed and efficient at supporting deforming caches.
So you can have high-res geometry with all your attributes, and then you can generate a point-deform cache that Alembic supports very efficiently within pipelines. It also supports motion blur segments, brute force, mind you, in that you actually have to literally put all of those motion-sampled snapshots into your Alembic file as well. So there's no curve or spline interpolation inside of Alembic at this point in time. And the most important thing, at the very bottom: no procedural primitive support inside of Alembic. That means Alembic is great and supports that per-department workflow where departments have to cache data out and then pass it on, but there's no procedural primitive support. Although, having said that, if you do start to introduce procedurals inside of a native data format, such as Alembic or the up-and-coming USD, then you need to be able to read that procedural primitive and expand it whenever you need to expand it. And that is the big issue there. Again, as we're gonna see with PDG, what it does to solve that is it's data-agnostic, and you can actually represent any application as a node inside of the TOP network. So it gives you a really great way of encapsulating parts of a pipeline with all of their various dependencies. But I'm jumping the gun a bit, because we're gonna be talking more about that later. So, formats and cache pipelines: USD. USD is an up-and-coming scene description and cache data format. It's probably no surprise to anybody here that USD is of great interest to SideFX. There are a lot of behaviors in USD that seem to address asset reuse inside of pipelines. But at the end of the day, it is still a cached data format. It also comes with some really nice benefits, such as native support for levels of detail, and a lot of nice built-in features to support cached data within the various departments. You can add variations of each asset.
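The deferred-loading property mentioned above, where data is only pulled off disk when something actually needs it, can be illustrated with a small lazy-cache sketch. This is a conceptual illustration in plain Python, not the Alembic API; the class and function names are invented.

```python
# Sketch of deferred (lazy) loading, the property Alembic gives pipelines:
# the expensive payload is only read off disk the first time it's needed.
# Hypothetical names, for illustration only.

class DeferredCache:
    def __init__(self, loader):
        self._loader = loader   # callable that does the real disk read
        self._data = None
        self.load_count = 0     # how many real reads actually happened

    def data(self):
        if self._data is None:          # first access: actually load
            self._data = self._loader()
            self.load_count += 1
        return self._data               # later accesses: already in memory


def expensive_disk_read():
    # Stand-in for reading a heavy geometry cache off disk.
    return list(range(1000))


cache = DeferredCache(expensive_disk_read)
# Nothing has been read yet; the cache is just a promise.
first = cache.data()    # triggers the one real read
second = cache.data()   # served from memory, no second read
```

The point is that a scene can reference thousands of such caches while only paying I/O cost for the ones it actually evaluates.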
And one of the nice features of USD is that once an asset is on the stage, you cannot delete it or remove it, you can only hide it, which is great for asset tracking. That means if you change your mind halfway through, those assets can't just be culled; they become hidden, but they're always still there. So you have a recipe, or a history, as to how you got to where you are right now. And again, procedural primitives: three question marks there. We don't know. Will USD have a procedural payload with a parameter interface? We don't know at this point in time. But as I go forward in this discussion, you'll start to see how not putting a procedural primitive inside of USD would be a huge mistake, because once you do put one in, data becomes liberated between pipelines, and techniques that were once landlocked in specific departments become available in other departments. So we have mixed application pipelines, which are very normal inside of all pipelines. We're finding that many more applications are now procedural, and many custom pipelines have procedural primitives inside of the way that they work, so that when they do a payload, they can actually evaluate that payload at load time and then run some procedures on it. And it does minimize the need for caching within departments, and it defers caching further down the pipe. So we were looking at that and going, yeah, that's pretty interesting. And the procedural recipe is only known to the pipeline tools because it's a part of that facility's tool set. Another example of successful proceduralism in pipelines is procedural shaders and textures. The rise of Substance in the last couple of years, especially in film and games, has really opened the eyes of a lot of these departments that had never experienced proceduralism before.
And they may not have introduced Houdini at all, but introducing Substance makes a lot of sense because it offers all the procedural advantages that a Houdini would, and it also bakes all your images out for you as well. So it's a pipeline in itself, very much like Houdini is. Other tools that are somewhat procedural: Maya, Softimage, 3ds Max. They do support proceduralism, but ultimately you are caching the results out for performance. So again, that proceduralism is not transportable; you're landlocked within your departments, and the data formats are not usually universally supported either. So, pipelines and approvals. It's very, very key to realize that it's this approval mechanism that determines whether an asset is a go or no-go. And approvals these days are spread throughout the pipeline. You never know when an asset is gonna be invalidated, forcing huge amounts of rework. The worst thing that can happen is that this happens at the back end: cached assets are approved and locked, and a change causes rework in the local departments. You can see the stresses that are there. Now, Houdini Engine. Houdini Engine is sometimes quite nebulous, even to our existing users and new users. They'll have a minimal install of Engine, just a very small installation, to support those specific Houdini tasks, most likely inside of FX and not too far outside of FX. And the majority of the use for Houdini Engine is, guess what, to cache data, to support this cached pipeline. Now, if you have a Mantra back end, or you have a way of evaluating HDAs at render time in your preferred render engine, you can actually load all of that work up at render time and therefore alleviate yourself from caching. But there are very, very few pure Houdini pipelines at this point in time. So, but that is there.
So the Houdini Engine plugin allows you to consume HDAs outside of FX, in third-party applications outside of the current department. And that's the genius behind Houdini Engine: it uses digital assets as a way of transporting proceduralism well outside of your current department. If you need to sell Houdini Engine inside of your facility, if you're a Houdini user and you're always bumping up against other departments, that's something to look at. Because if you have a smoke effect that just fills an area with smoke, you can give that digital asset to a Katana user, and they can digest that digital asset inside of Katana with an Engine implementation; that lighting artist can now do work that would normally be in another department. And for games, for instance, using level design tools as just a simple set dresser, my gosh, you've now got the power of environments right inside of your level population tools. We're starting to see more and more of that use of Engine, especially in the area of character effects. Character effects is seeing great liberal use of Engine inside of Maya, to great effect. So that's really cool, and it works now. A few teething pains over the first couple of years, but now it works, it's very robust, and we're continuing to support Engine and expand its use. I believe we added 3D Studio Max inside of Houdini 17. So, great. Now we know that Houdini Digital Assets can liberate a lot of these workflows and tools that were landlocked inside of specific departments. So the biggest question of all: does proceduralism actually work? I got that question many times in years past, but today I don't get it very much. And the reason why is because in visual effects, Houdini absolutely does work. It's hard not to learn Houdini if you're gonna be doing visual effects in the film industry.
And in games there's some overlap, but I would say if you're doing a lot of effects work, yeah, Houdini would be indispensable in games creation as well. Other areas where proceduralism actually works: compositing, with ICE, Shake, Chalice, and now Nuke; subsets of Substance; and render procedurals that defer geometry creation to the actual render. Without those, we wouldn't have been able to render hair and some other procedural systems in the past. Although pipelines are getting much more robust now, the demands keep growing: hair simulation, for instance, is requiring more and more of the guides to be rendered, and in a lot of the visual effects work we're doing for environments, more and more of that environment has to be woken up, so you're actually working with the real geometry. That's putting a lot of stress on pipelines, and again, one of the answers would be to use digital assets to do procedural geometry creation at render time; it's really helpful to be able to do that. So the Houdini Engine plugin works in a lot of third-party apps, as I said, and it is very extensible and expandable with no more than a hundred or so lines of code, and it's deeply embedded. It's wonderful to have Ken Xu on here with his team as well, because they're also the developers and maintainers of Engine. So later on, if you have any Engine questions, certainly throw them at us, but keep them mostly related to PDG. So, the negatives of proceduralism. There are negatives.
It's very compute-intensive to always regenerate the data, all the time, throughout your pipeline. But that is slowly being eroded over time, and one of the things helping out here is the sheer number of cores and processors that are available to us. Up until PDG, accessing all of that power required an extensive amount of energy from facilities and users to figure out how to chop up your work, how to partition it, and then how to execute it on your farms. And I'm careful of the words I choose, because those are the exact same words that we use inside of PDG. So the current distribution of work driven by proceduralism is really somewhat dumb, in that it works linearly. If you think ROPs, you're thinking of that linear stack. We do have ROPs, render output drivers, which allow us to schedule the output or the execution of specific steps inside of Houdini, and if you work in some loopholes, such as the Unix SOP or some other SOPs that you can use to quietly grab some external resources, you can do that as well, but it's not very elegant. Another negative with proceduralism is: what do you actually approve? The answer is the recipe that creates the asset. You're not actually approving the asset itself; you're approving the recipe that creates the asset. That's really important to recognize. In terms of a digital asset, what are you approving? You're approving the current version of the asset along with its current parameter state, and those two things are easily grabbed from the current shot and easily replicated in other parts of the pipeline. So that's pretty much what you're approving. And it doesn't hurt; I know a lot of pipelines that do use digital assets will generate caches as well, as a safety margin.
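The idea that you approve "the current version of the asset along with its current parameter state" can be made concrete with a small fingerprinting sketch. This is an invented illustration, not SideFX code: the `recipe_id` function and asset names are hypothetical.

```python
# Sketch of "approving the recipe, not the asset": the approved unit is the
# asset version plus its parameter state, which together can be replicated
# anywhere else in the pipeline. Hypothetical structure for illustration.
import hashlib
import json

def recipe_id(asset_name, asset_version, parameters):
    """Stable fingerprint of a procedural recipe (asset + parameter state)."""
    blob = json.dumps(
        {"asset": asset_name, "version": asset_version, "parms": parameters},
        sort_keys=True,   # key order must not change the fingerprint
    )
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()


# The shot artist's approved setup...
approved = recipe_id("smoke_fill", "2.3", {"density": 0.8, "seed": 42})
# ...replicated elsewhere with the same version and parameter state:
replicated = recipe_id("smoke_fill", "2.3", {"seed": 42, "density": 0.8})
# Any change to the parameter state is a different recipe needing re-approval:
changed = recipe_id("smoke_fill", "2.3", {"density": 0.9, "seed": 42})
```

Two identical recipes fingerprint identically no matter where they're evaluated, which is exactly why the tool that reads the recipe has to be bulletproof: the fingerprint guarantees the inputs, and the tool has to guarantee the output.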
So when you do approve the asset, they'll actually take a snapshot of the geometry as well, or whatever it is that this asset is creating. And you'll note, again, that in PDG every node by default caches its data to disk. So PDG is actually very well suited to address this negative aspect of proceduralism as well. Now, the integrity of the tool that reads the procedural recipe is the most key point of all. And I need to stress this because I did support for many, many years, and I'd argue I really still do it today. If you try and open up a scene file with today's Houdini that was built two or three years ago and Houdini fails, we're going to get a support request to fix something. And we take that deadly seriously. Keeping asset integrity in place is very, very important to SideFX. So that means that if you approve the recipe that creates the asset, the tool that reads that recipe better be bulletproof. It has to be bulletproof, and it has to come with a great amount of support. Up until today, a lot of the large facilities would only rely on or trust their own system administrators. And we'll get into that in a bit, because most system administrators, up until this point, don't even know how to fire up Houdini, even though their visual effects teams have been using Houdini for the last number of years. So proceduralism is necessary. The changing nature of pipelines: much more content, departments needing to communicate more strategically, and the integration of character effects and effects in a combined pipeline is a reality. There are more and more issues that we could talk about, but these are the key ones that served as inspiration when Ken and his team decided to design and deliver PDG. And the most important thing about PDG is that it's re-entrant. In other words, you can build a PDG description of a shot, for instance, and two years from now, you could fire up that PDG graph.
And just like Houdini opening up a scene file, the PDG graph will be able to recreate what it is that you created years before. That is a critical part of being able to rely on procedural assets in your pipeline. Another thing about PDG is that, unlike Engine, it is no longer reliant on digital assets. Digital assets are a fantastic compressed way of delivering procedural geometry data, volumetric data, and other assets relevant to film and games. But PDG extends its use broadly to pretty much any application that you can bring into the graph and use in that graph to deliver your content. So dependency tracking in pipelines becomes a big issue as well. How much changed? When does a change happen? What departments does it affect? How do you minimize the compute? Do you just regenerate all the caches? That is still the way a lot of Houdini effects pipelines work: if there's a change to rework, sometimes it's just easier to completely regenerate the whole job than to change one part of it. With PDG, that all changes, of course. Now, existing tools also solve specific issues, and we've seen these tools in the past as inspiration. There's been Temerity, Southpaw with Tactic, Shotgun, and GitHub. So there are a lot of tools that have grown up through the years to support existing pipelines, to try and track these dependencies and keep the integrity of pipelines in check. And then of course the custom studio tools, which we run into a lot. So what is SideFX's position on pipelines? Pretty simple: Houdini is a pipeline tool in itself. And that was born out of necessity, actually. Traditionally, Houdini teams grew organically in the shadow of existing cache-based pipelines. A Houdini effects team might have 10 to 20 artists working in a facility, but they worked completely independently of the rest of the pipeline.
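The dependency-tracking questions above — what changed, what does it affect, how do you avoid regenerating every cache — come down to dirty propagation through a task graph. Here is a minimal toy sketch of that idea; it is an invented illustration of the general technique, not the PDG implementation, and all names are hypothetical.

```python
# Sketch of dependency tracking: rather than regenerating every cache,
# a change marks only the downstream tasks dirty, and only dirty tasks
# get recooked. Minimal toy graph, hypothetical names.

class TaskGraph:
    def __init__(self):
        self._downstream = {}   # task -> tasks that depend on it
        self.dirty = set()

    def add_dependency(self, upstream, downstream):
        self._downstream.setdefault(upstream, []).append(downstream)

    def mark_dirty(self, task):
        """A change to one task dirties everything downstream of it."""
        if task in self.dirty:
            return
        self.dirty.add(task)
        for dep in self._downstream.get(task, []):
            self.mark_dirty(dep)

    def cook(self):
        """Recook (and clean) only the dirty tasks."""
        cooked = sorted(self.dirty)
        self.dirty.clear()
        return cooked


g = TaskGraph()
g.add_dependency("model", "cfx")
g.add_dependency("cfx", "lighting")
g.add_dependency("fx", "lighting")
g.mark_dirty("cfx")       # a character-effects change...
recooked = g.cook()       # ...recooks cfx and lighting, not model or fx
```

The contrast with the "just regenerate the whole job" approach is that `model` and `fx` keep their existing caches untouched.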
The pros were that the Houdini teams could flourish and use proceduralism as liberally as they wanted within that domain of effects. The cons, well, we'll get into the cons in a bit. But Houdini is extremely low-maintenance on internal systems, which means it's not getting much love from the system administrators or from the pipelines themselves either. So Houdini grew into being a pipeline in and amongst itself. That's why we did Houdini Digital Assets. That's why we continue to refactor Houdini, many times over, to try and make it the best darn procedural tool possible to support these effects teams in what I would consider to be hostile environments, where the only thing they get is data in, and then cache data out. We adopted many ways to support this, from daily builds to really good support. And now you're starting to think, well, why did we do that? The answer is that proceduralism really requires the application that reads and generates it to be very stable if it's to grow and flourish. And we've taken that very seriously. So, our position on pipelines. The key issue for us is caching. We see caches between departments as roadblocks, or impediments to the free flow of assets through pipelines. It also landlocks the applications generating that data to those departments. And we're seeing a demand for those applications to leave those departments for other departments, to compress time and also to compress the amount of cached data, or defer the caching of the data until later on. Import of custom data formats is also critical for us, as is export of custom data formats. In recent years, we've embraced open source: Alembic, OpenVDB, and of course we're looking heavily at USD as well, and other open-source formats. And all-Houdini pipelines do work. If you have a facility that has Houdini going from the beginning to the end, you're getting all the advantages of proceduralism and shared assets and shared tools across all your pipelines.
It's somewhat rare at this time, and that's more due to historical issues than anything else; it could be something to do with costs in the past as well. But we've addressed all of those issues moving forward, or as many as I think we have. FX pipelines with Houdini are procedural, absolutely. Lighting pipelines are procedural with Katana, Clarisse, and of course Houdini. And environments: we're keying specifically on environments as well, because we see them as a key target for proceduralism. It's no surprise that all of these entries are definitely in the realm of PDG, especially the environments part, because with PDG you can divide and conquer easily to build massive environments, and you can parallelize vast swaths of your environment creation. That means you can pack a lot more into the given memory that you have on your hardware, and dividing the tasks across a farm is more than possible now. And Houdini Engine? Engine comes into play a lot. I've been having a lot of discussions with customers recently, and there's some confusion. They've started playing with PDG, and they're really excited by it, but they don't know how it's going to affect their pipelines. They're saying, well, do I need licensing, do I need Pilot PDG? And the answer is: grow organically. Houdini 17.5 FX ships with PDG, the same with Core, and Engine can run it as well. So if you have Engine licenses currently in FX that are processing data, you can absolutely use those Engine licenses and simply port your ROPs workflow directly into TOPs. The only downside with that, as we'll see in a minute, is supporting your render farms. So Houdini Engine also gives you the ability to procedurally crunch data on compute farms.
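The divide-and-conquer approach to environments mentioned above can be sketched as tiling: split a large area into independent tile tasks, so each one can be cooked on a separate machine and in its own memory footprint. This is a generic illustration of the partitioning idea, not PDG code; the functions and numbers are invented.

```python
# Sketch of divide-and-conquer for environments: partition a large area
# into tiles, cook each tile as an independent task (so it can be farmed
# out in parallel), then gather the results. Hypothetical example.

def make_tiles(width, height, tile_size):
    """Partition a (width x height) environment into tile tasks."""
    return [
        (x, y, min(tile_size, width - x), min(tile_size, height - y))
        for y in range(0, height, tile_size)
        for x in range(0, width, tile_size)
    ]

def cook_tile(tile):
    # Stand-in for per-tile scattering/instancing work done on the farm.
    x, y, w, h = tile
    return {"origin": (x, y), "points": w * h}


tiles = make_tiles(100, 100, 40)          # 3x3 grid: tiles of 40, 40, and 20 per axis
results = [cook_tile(t) for t in tiles]   # each tile could run on its own machine
total_points = sum(r["points"] for r in results)
```

Because each tile only needs its own slice of the environment in memory, the full result can be far larger than what a single machine could hold at once.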
So that crunch we heard about: by using digital assets, sometimes at the last minute, as we've seen with some recent examples in the last year, even introducing procedurals very late into the game can minimize the amount of crunch you need to do in the last months leading up to your delivery. Render farms and dispatchers are well integrated with Engine as well; we even have our own HQueue, which works quite well with Engine. And ROPs are the key technology. So Engine can be used to distribute farm tasks: IFD, RIB, ASS, and USD files; to generate geometry caches: Alembic, OpenVDB volumes, bgeo, USD, and even photogrammetry data; and to generate image caches, as games do, and our own game tools are absolutely, completely supported inside of Engine, of course. And then there are also distributed simulations. One of the things that Engine can do is script-driven processes, which is constructing files on the fly. So we might want to populate shot directories from template files, for instance, or create and manage digital assets, construct materials from shader nodes, and so on and so forth, or utilize digital assets as tools inside of pipelines. We've got a time-honored tradition of using one or two SOPs as grease inside of very complex pipelines that might be pushing data through many different DCCs. Let's say the normals aren't working right, or there's a specific tool that is not available in the other DCCs or custom tools: they'll slip a little Houdini SOP in there and put that into the pipeline. And Engine makes that trivial these days. That time-honored tradition goes back 25 years; for as long as I've been working at SideFX, I've seen that sleight of hand being used. File conversions, TIFF images to RAT, custom plugins, and, more importantly, creating assets.
So, generating image data, for instance with COPs, with input data, similar to what Substance is doing, but marrying that with geometry as well, which we can do easily inside of Houdini. And all of that is now designed into PDG. So you can see how we pushed Engine to the limit, and facilities were pushing Engine to the limit. And it's complex to set up; we know that because most studios have a minimal installation of Engine. So we know that this procedural use of assets is confined to just a subset of machines. What we wanna do is take this production story timeline, and this could be applied to anything you want in film. This is the production-based workflow that we would love to support: you only wanna work on a key shot in every sequence, then capture the recipe for that key shot, then populate it automatically to the rest of the shot sequence, and then you do your work. There are some tools that do this very well. For instance, Katana has a very well-evolved workflow for this, as does Houdini; a lot of lighting pipelines have augmented Houdini to support this sort of workflow. I've been involved in several feature films that have done this as well. So we know Engine is up to the task, but the problem is it's difficult. We have a whole suite of Engine plugins and cloud-based interface tools as well. But ROPs are at the end of their line right now, especially with PDG. ROPs can cache anything in Houdini, but it's linear execution, no dependency tracking, really dumb. And attempts at improving ROPs are honorable, but fall well short of what is actually required to do the real work of micromanaging a lot of our assets and dependencies within pipelines. So we got lots and lots of feedback over many, many years on how to improve that.
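The key-shot workflow described above — dial in one key shot per sequence, capture its recipe, then populate it across the remaining shots — can be sketched as a simple template-plus-overrides expansion. This is a hypothetical illustration; the function, shot names, and recipe fields are invented.

```python
# Sketch of the key-shot workflow: capture the recipe from one key shot,
# copy it to every other shot in the sequence, then layer per-shot
# overrides on top. Hypothetical names, for illustration only.

def populate_sequence(key_recipe, shots, overrides=None):
    """Copy the key shot's recipe to every shot, applying any overrides."""
    overrides = overrides or {}
    populated = {}
    for shot in shots:
        recipe = dict(key_recipe)                  # start from the key shot
        recipe.update(overrides.get(shot, {}))     # then per-shot tweaks
        populated[shot] = recipe
    return populated


key = {"light_rig": "dusk_v3", "fog_density": 0.2}
shots = ["sq010_sh010", "sq010_sh020", "sq010_sh030"]
seq = populate_sequence(
    key, shots,
    overrides={"sq010_sh030": {"fog_density": 0.5}},  # one shot needs more fog
)
```

A change to the key recipe then only requires re-populating, rather than hand-editing every shot in the sequence.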
And the answer, for all the reasons that I talked about, was that we had to build this thing called PDG, to basically liberate proceduralism to all other parts of the pipeline and beyond. So PDG was conceived: a new network designed to take Houdini's proceduralism and apply it to any data, task, process, or application. Each task has its own dependencies, and it extends far beyond Houdini. It was designed with a really simple Python interface so you can extend the various nodes easily. And it also supports Houdini very well; Houdini data types are first-class citizens. For instance, digital assets with the HDA Processor TOP, and the ability to read attributes directly from geometry, bring those attributes right inside of the PDG graph, and use them to make decisions further down the line. It's very well supported, very, very powerful. And as I said, if you're a facility that is concerned about having to purchase more software, the answer is, at first, no: Engine, with Houdini tasks powered by PDG, can consume those Engine licenses and make the most of them on your farm. It also helps clarify what each person, department, and facility defines as a pipeline. So it gives a lot more clarity into the procedural platform for pipelines. In other words, it's a guiding light. If you're in a facility and you're not in the effects department, not exposed to Houdini on a day-to-day basis, PDG can give you a much clearer look at how a procedural pipeline might look in any department in your facility. And yes, PDG does have tools to describe end-to-end processes and workflows within a facility, so it can finally give you a really clear way to sketch out a procedural recipe or workflow within a pipeline. And we are continuing to develop that even further.
So PDG within Houdini easily parallelizes tasks with any node or network type. It liberates ROPs into a proper framework. So if you want to get an early win with the PDG, just move your ROP workflow over to TOPs. Wedging is fully supported; as a matter of fact, it's on steroids inside of TOPs. And it works like SOPs, kinda. And we continue improving the workflow within the PDG to make it much easier to work with. For instance, you can middle-mouse on any task to get information; we use the same time-tested and proven UI workflows from Houdini inside of TOPs as well. PilotPDG is also a new application that we shipped with Houdini 17.5. It's a standalone PDG editor that works with virtually any application, and it's expanded with pipeline tools. I was talking with Chris, who is moderating, just this morning, asking some stupid questions like: how can I run Houdini's command-line tools? Because I know they're wholly underutilized. And the answer is, I started playing with it just a few minutes before we started this webinar, and I'm going, oh geez, it's really easy — easier than I thought. It's just basically implemented with Python. So now I'm getting excited about tearing through and building maybe some prototype TOPs as wrappers around all of our image tools — all the command-line tools that ship with Houdini. It'd be great to have those tools exposed inside of the PDG as well. But if you want to expand the PDG into your pipeline and support your tools, there are enough examples in there for you to build your own Python-based implementation inside of TOPs to support those as well. And we do have a growing list of supported applications: ImageMagick tools, Photoshop, Substance, Shotgun, Perforce, and a lot more. And one thing I do have to stress is that the PDG forum is an active place to start discussions, and we look at those on a very regular basis.
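What such a command-line wrapper boils down to is generating one command string per work item from its attributes. Here's a minimal plain-Python sketch of that idea; the ImageMagick `convert` invocation is illustrative only and is never actually executed here:

```python
def make_commands(frames, src="render.{f:04d}.exr", dst="preview.{f:04d}.jpg"):
    """Format one external command per frame, the way a generic command
    wrapper generates per-work-item command lines from attributes."""
    return [f"convert {src.format(f=f)} -resize 50% {dst.format(f=f)}"
            for f in frames]

cmds = make_commands(range(1, 4))
print(cmds[0])  # convert render.0001.exr -resize 50% preview.0001.jpg
```

A real wrapper would then hand each command to the scheduler to run out of process; the interesting part is that the command templates are just attributes, so they wedge and branch like any other data.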
And we're always looking at ways to shore up some of the holes inside of the PDG. There are some quick wins we can have, and other things will take a lot longer to do. So, PDG studio notes — these are some notes for studios to carry forward, things to remember. PDG ships with FX and Core; it's in 17.5. Use it. There are a lot of tutorials on it. As I said, it's a great replacement for ROPs. Now, the caveat is your scheduler needs to support TOPs. So if you're using Deadline, Tractor, or HQueue, you have a pretty seamless way to simply move over from ROPs to TOPs. But if you have your own custom scheduler, or you're using another application, there are examples in there to quickly add TOPs support to the scheduling pipeline on your farms. It also works well within effects departments right out of the box, so you can finally implement a proper shot-based effects workflow. I know a lot of facilities have written a lot of tools to support a shot-based workflow on top of ROPs and on top of Houdini itself. But the nice thing about the PDG, like Houdini itself, is that if you migrate those tools and skills to the PDG, and you're hiring users into your team who are versed in the PDG as opposed to your custom tool sets, they'll be off and running even faster than if they had to learn your own tools. So there are a lot of incentives to investigate implementing PDG inside of your effects departments, especially for shot-based workflows. And of course, wedging. There are a couple of different ways to do wedging, and there are a couple of videos on that online as well. And if you want to do a future Lume webinar on wedging, I can show you the approach I've adopted, which I think is awesome: I don't even set it up inside Houdini networks anymore — I just use assets, and with PDG I just use attributes on the tasks themselves to drive how the wedging works.
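That attribute-driven wedging amounts to expanding one work item into the cross-product of its attribute variations. Here's a minimal plain-Python sketch of the idea — the real Wedge TOP is more capable, and the attribute names here are made up for illustration:

```python
from itertools import product

def wedge(base, variations):
    """Expand one work item into wedged copies, one per combination of
    the varied attributes; every copy keeps the base attributes."""
    keys = list(variations)
    items = []
    for combo in product(*(variations[k] for k in keys)):
        item = dict(base)
        item.update(zip(keys, combo))
        items.append(item)
    return items

items = wedge({"shot": "fx010"},
              {"viscosity": [0.5, 1.0], "seed": [1, 2, 3]})
print(len(items))  # 6 work items: 2 viscosities x 3 seeds
```

Because the wedged values are just attributes on the tasks, everything downstream — sim, render, contact sheet — can read them without any extra plumbing.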
It's really, really elegant and it's all in situ — you're all in the PDG network. Another thing is to describe end-to-end department pipelines. Big pipeline and department changes will be required there, though, because once you start to engage the PDG to do the entire pipeline, you have to buy into this: it's going to replace or augment a lot of the existing tools of your large facility. But if you're a small facility that currently does not have anything like the PDG in your pipeline — which is a large number of studios, and probably a lot of the users that are on this today — you now have a way forward to build a pipeline dependency graph. And with that pipeline dependency graph, you can track changes, you can schedule work, you can build really nice, elegant pipelines and keep on reworking them. And if you fuel them with Houdini digital assets and Engine, you can have parts of your departments feed data to other departments. And then you can start to collapse pipelines in certain areas where skills and available talent are there for you to take advantage of. So I'm almost at my 45 minutes — time to pass over to Ken and his team and all the questions I'm sure you people have. There are tutorials at SideFX; please watch them all. Even if you're in film, watch the games ones, because they show you a lot of the intricacies of how to visualize TOP data inside of various applications, including Houdini's own viewport. And there's also a growing list of third-party tutorials. Almost every day now I'm seeing a new PDG tutorial or PDG example on one of the many, many social media channels that I have to follow these days — which I enjoy following, by the way. There's also the PDG forum. So if you start getting into implementation questions and issues, the PDG forum on sidefx.com is an excellent place to go.
And most importantly of all — and this is the thing, if you're a Houdini user just starting to get into TOPs right now — I can tell you it's no different than learning SOPs, or extending your knowledge of a network type into another part of Houdini. It comes down to the nodes. Once you understand the basic framework of how to wire a couple of TOPs together, and understand how attributes flow and how dependencies work, guess what: it's now time to roll up your sleeves, go through every single TOP in that network, and start to expand your language. And that means going to the help cards and looking at the example files. There are a lot of really well-documented example files in there. Because you've got to remember, with TOPs there could be a lot of potential duplication. For instance, what's the easiest way to create a sequence of directories on disk? I've seen two examples so far that require a lot of Python work. But I've got an example that can do everything just by typing in attributes on the top node in the PDG: you just run it through a for-each block, and you can quickly create all your directories without writing a single stitch of Python code. So it's just like Houdini — there are so many ways to do the same thing. But I'm lazy, so I'm always desperately looking for the simplest and most elegant way to do work. And that's what's driving my excitement with PDG right now: taking my 50-node PDG graph and seeing if I can get it down to 10 nodes, making things a lot more elegant. Each one of the TOPs can have multiple behaviors and uses these days, so be aware of that. You need to go to the help card, you need to take a look at the example files, study them, make them work. And if they break, definitely talk to us. So where to start? It's a blank slate, just like SOPs. The only way to learn is to go through the examples, or to master the TOP nodes. It's just like Houdini.
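For comparison, here's roughly what that directory-creation task amounts to in plain Python — a stand-in for the attribute-driven TOP setup, with sequence and shot names invented for the example:

```python
import os
import tempfile

def make_shot_dirs(root, seq, shots, subdirs=("geo", "render", "comp")):
    """Create seq/shot/subdir folders for every shot, returning the
    created paths -- the same cross-product an attribute-driven
    for-each setup would enumerate as work items."""
    created = []
    for shot in shots:
        for sub in subdirs:
            path = os.path.join(root, seq, f"{shot:04d}", sub)
            os.makedirs(path, exist_ok=True)
            created.append(path)
    return created

root = tempfile.mkdtemp()  # scratch location for the example
paths = make_shot_dirs(root, "sq010", range(10, 40, 10))
print(len(paths))  # 9: 3 shots x 3 subdirs
```

The point of the in-graph version is exactly that this loop never gets written: the attributes define the cross-product, and the for-each block enumerates it.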
And of course it helps if you actually have a real need or necessity to use these. For instance, if you're in effects, the real need or necessity is: let's migrate from ROPs to TOPs, done. Let's see what it takes and get there. That could be your task. And then get your pipeline up and running. And trust me, you will see a vast improvement in the way in which you can deliver effects shots if you move from ROPs to TOPs. And it's a generalized network that ships with a decent amount of nodes. It's been under development for well over a year and is in its almost final state. And there you go. So that completes my quick run-through of the design of pipelines, how Engine eventually evolved into this, and finally how PDG came out of all the needs of Engine and of pipelines themselves. It really was an opportunity for us to take proceduralism and spread it much, much further than Houdini. And we haven't even talked about cloud. We haven't talked about machine learning opportunities with the PDG graph. We haven't talked about applying the PDG to things well beyond film and games — I mean, it could even be used for tasks beyond that. And it has complete integration with common data formats within pipelines as well, which includes CSV data; we have Python, PyQt, and all kinds of other data formats supported as well. And it's also extensible and extendable. So if you have your own representation of data within your pipeline, you can take the PDG and wrap it around that data. And that's it for my quick introduction to PDG. I tried to differentiate this from pretty much every other PDG presentation out there, to give you really clear ideas of why this thing actually exists, the problems that exist, how pipelines are struggling, and how proceduralism in general is a really great way forward. But remember, proceduralism needs to be backed up with a really strong and supported procedural tool.
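That CSV integration is easy to picture: each row becomes a work item whose columns become attributes. A plain-Python sketch of that mapping — the shot/frame column names are invented for the example, and this is not the PDG API:

```python
import csv
import io

# Inline CSV standing in for a file on disk.
data = """shot,start,end
sq010_sh010,1001,1048
sq010_sh020,1001,1102
"""

def csv_to_work_items(text):
    """One work item per row; columns become typed attributes."""
    items = []
    for row in csv.DictReader(io.StringIO(text)):
        items.append({"shot": row["shot"],
                      "start": int(row["start"]),
                      "end": int(row["end"])})
    return items

items = csv_to_work_items(data)
print(items[1]["end"])  # 1102
```

Once rows are attributes, everything downstream — frame ranges, output paths, wedges — can be driven from the spreadsheet instead of being hand-typed per shot.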
And that's it. So I guess we'll open up to questions and the team. Thanks, Jeff. Ken, Taylor, and Chris, do you want to also join us? Flip on your cams and... Yeah, okay. So I'm trying to — well, you can hear me, right? Yeah. Okay, sounds good. Chris, can you allow me to start my video? Yeah, go ahead. It says that the host has stopped it. I'll try it one more time. Okay, it doesn't matter, I'll just maybe talk a little bit and try to enable the camera later. Oh, there it is. Yeah, okay. So just to follow up a little bit on what Jeff said: if I had to summarize everything for you, it comes down to what we call the 3Rs, okay? Because of PDG's application to pipelines. The 3Rs stand for robustness, reviewing in context, and repeatability. By robustness, we mean: if somebody makes a change upstream — let's say you have a model of a cup and you make the handle bigger — how do you make sure that everything downstream from that gets updated? Say the animation has this cup tipping over; maybe now, because the handle is bigger, it's interpenetrating the table, so the animation has to be changed. Maybe there are fluids spilling out of this cup, so you want to make sure the effects are being redone with that. How do you ensure that level of robustness, such that these things don't get missed, right? So by having a procedural pipeline, we can ensure that level of robustness in a pipeline. Secondly, reviewing in context. Say you have some change — maybe you modeled something and you put a new model in — how do you know that this change is in fact good in all the ways this model can appear, in one shot or multiple shots? Maybe there's some shot where the camera is at such a weird angle that you may be seeing the back of this thing that you didn't end up modeling. How can you, just based on the fact that you've created the new model, get the latest everything except with your change, right?
Your latest model with the latest lighting, latest animation, latest everything — produce renders of this thing in all the places where you can see it, such that you can review it in context and make sure your change is in fact good, is in fact not gonna break the final pixels, right? And then lastly, this idea of repeatability. Maybe a few years down the road, somebody says, you know what, we wanna do some dust — we should do the dust we did on Lion King. But what you mean by that is you don't actually want the dust from Lion King, because that dust is yellow, it's set in Africa; maybe you're doing Avatar now and it's supposed to be green, on some other planet. You need to be able to repeat the process that created that dust on Lion King. And that process today is not being written down and checked in as an asset. So by describing the process by which you created these assets, the process becomes repeatable. The process that created the asset is itself an asset that you can check in and you can version. So those are the three key reasons around PDG. So do you guys have questions? Now would be a good time to throw them at us and we'll do our best to answer. Online question from Nicholas: is it possible to cook from the command line? It would be really cool if I could close Houdini once I start cooking a task — basically have a separate cook-monitor app. Yeah, so we do have a full API around this, and it's in fact possible to trigger cooks just from Python, let's say. Taylor, Chris, do you have additional thoughts on that? I think we'll pull up the API docs. Oh, yeah. It's basically everything. Okay. Yeah, so Taylor over here can pull up some API links to maybe throw into the chat. But yeah, that's basically it around that. Can you talk more about your plans for PDG slash Engine on the cloud? In the games industry, we're interested in leveraging Houdini at runtime, and the cloud seems like a good fit for that.
Yeah, so ever since we developed Houdini Engine, runtime has never been far from our minds. And even though today with PDG we're talking about pipelines, it can in fact be the foundation of this technology — it enables us to lay the technical foundations for creating a runtime technology. So already today, PDG is agnostic of the scheduler. We have all of these schedulers today, like HQueue, Tractor, Deadline, which we support, and you can bind any scheduler you want to the technology. And with that comes the ability to actually go to the cloud. So for example, you could leverage one of the existing schedulers' ability to go to the cloud, and because we bind to the scheduler, that just happens almost transparently. But if you want a more native cloud binding with PDG, that is certainly possible to do with our scheduler interface, and that is very much the long-term direction we intend to take with this technology. Hopefully that answers it? Others? I think so. Another question: just confirming here, when I set a frame range in a TOP ROP Fetch, it overrides my Mantra frame range, right? So I can basically just ignore the Mantra frame range? There are multiple nodes that use the frame range, and an explanation of what takes precedence would be appreciated. Yeah. Taylor, how's your... Actually, yeah, Taylor's audio is not working so well. It's a good thing we're right next to each other, so I'm gonna just swing over to Taylor here. So for the frame range, TOPs always take priority. What shows up on the TOP ROP Fetch will override the actual ROP. Great, thanks. Does OpenCL get canceled when simulating with PDG? I was wondering, because I can grasp the concept of wedging on CPU; I'm not sure how it works with GPU. Is OpenCL detrimental to using PDG? It shouldn't really be any different. It should work just fine. Will PDG rendering be more optimized for GPU?
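The scheduler-agnostic point can be pictured as a thin interface: the graph hands ready work items to whatever scheduler is bound, and the scheduler decides where they run. This is an illustrative sketch only — PDG's actual scheduler interface is different and richer:

```python
from abc import ABC, abstractmethod

class Scheduler(ABC):
    """Whatever sits behind this interface -- local processes, Tractor,
    Deadline, HQueue, or a cloud service -- the graph doesn't care."""
    @abstractmethod
    def submit(self, work_item):
        """Run one work item; return a status string."""

class LocalScheduler(Scheduler):
    def __init__(self):
        self.submitted = []
    def submit(self, work_item):
        # A real local scheduler would spawn a process here.
        self.submitted.append(work_item)
        return "done"

def cook(items, scheduler):
    """The graph only ever talks to the abstract interface."""
    return [scheduler.submit(i) for i in items]

sched = LocalScheduler()
statuses = cook(["sim_slice_0", "sim_slice_1"], sched)
print(statuses)
```

Swapping in a cloud-backed `Scheduler` subclass changes nothing upstream, which is what makes the "go to the cloud almost transparently" claim possible.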
I tested Redshift and it's not as robust as a CPU renderer. Sorry, we're swapping headsets back. One more time please — what was that? Will PDG rendering be more optimized for GPU? I tested Redshift and it's not as robust as a CPU renderer. Right, okay, so for Redshift we're just spawning up another process, so that is really agnostic to PDG itself. In time, this idea of natively writing a kernel in PDG and then utilizing your local GPUs as a kind of farm — that's on the road map, but it's not something that's urgently on the road map, let's say. When will flipbook be supported? Are there plans for it? Flipbook, how, like... Flipbook output. Oh, we already have the FFmpeg node, right, which creates a movie. But I assume this is about some other sort of third-party application, right? If that's the case, then it would be a matter of either you utilize the API to write something like that yourself, or, if there's sufficient demand from the community as a whole — if we hear a lot that it's required — we would potentially consider adding something along that line. Does PDG offer any benefits for slicing FLIP simulations over standard Houdini? If so, how? I've not been able to get slicing working with Houdini and Deadline, and I'm just curious if PDG offers any help for this task. Yeah, I'll answer the very first part of this; I think the rest of it's going to be Taylor. So yes, we do support distributed simulations, and the benefit is that as each slice is done, we can unblock the corresponding frame quickly downstream. So I think... You can keep the headset up a little bit. Yeah, okay. So the other benefit of using it with... I think it's okay. With PDG is that, because PDG is scheduler agnostic, we can schedule distributed sims on Deadline, Tractor, or locally. Whereas before, it was tricky to run Houdini distributed sims without using HQueue. So it's no longer tied to HQueue.
It's one of the, I'd say, larger benefits of it. Okay. So I guess that's one... I'm seeing these questions here. Is there a way to write stuff from TOPs that affects Houdini globally? We tried to query Shotgun for data to set environment variables, but these only seem to propagate down the PDG graph. I think Chris might be able to answer that one. Sorry, could you repeat the question? Okay. Is there a way to write stuff from TOPs that affects Houdini globally? We tried to query Shotgun for data to set environment variables, but these only seem to propagate down the PDG graph. I don't think it... Because we're setting environment variables on basically the jobs that we're spawning. Yeah, maybe you can speak to that. Actually, I don't quite understand the question. You're talking about environment variables and Shotgun? Yeah, I think my reading of that is: okay, so they tried to query Shotgun for data to set environment variables in Houdini, but these only... They're trying to modify the current Houdini session's environment variables. Right. The environment variables we set, I think we propagate those to the farm as we spin up tasks or jobs there. But... Yeah, that's right. The environment variables on the work item are for the out-of-process environment in which that job is executed. Right, but if we, let's say, have a Python Script node that we're running in-process, and we set some environment variables there, we could absolutely do it there. But if you want to do that, you have to do it in a way that accounts for PDG cooking in parallel — you have to deal with the fact that you're modifying shared global state from multiple places. Right. A Python Script node could be used to set variables in the current Houdini session if you wanted to. Okay, so hopefully that handles that. So try the Python Script node if you want to modify your own session of PDG or Houdini. Okay.
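The distinction in that answer — per-work-item environment variables apply to the spawned job, not the current session — can be shown with a small self-contained Python sketch (plain `subprocess`, not the PDG API; the `SHOT` variable is invented for the example):

```python
import os
import subprocess
import sys

def run_job(cmd, item_env):
    """Merge the work item's variables into the child's environment at
    spawn time; the parent process's environment is never modified."""
    env = dict(os.environ)
    env.update(item_env)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The child sees SHOT; the current session does not.
r = run_job([sys.executable, "-c",
             "import os; print(os.environ['SHOT'])"],
            {"SHOT": "sq010_sh020"})
print(r.stdout.strip())  # sq010_sh020
```

Setting `os.environ` directly inside an in-process script node is the session-wide alternative mentioned above, with the caveat that a parallel cook makes that shared mutation racy.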
Can you give us a few game studios that will be using PDG networks in production? Due to NDA issues, we cannot name names. But yes, people are using PDG in production. I'll quickly mention maybe one that sent an email to me that said, yeah, we tried this and it's been a game changer for us. Unfortunately, you know, the rest of the email has five more paragraphs that involve us doing work, so... Well, we'll leave it there. But yes, it makes a big difference, because of, you know, what we showed at the launch. These abilities really bring it up to a larger scale and capture all of these dependencies. It's Houdini Engine really at the next level of proceduralism: you just move a world curve and a bunch of stuff just happens, as opposed to chasing down a couple of hundred instances of Houdini Engine assets and trying to modify those kinds of things. Would there be any reality capture nodes? So interestingly, at one point when we were doing our internal demos — early, early, early demos — reality capture was one of the things we were testing the procedural pipeline around. And I think our games team is actually doing something with Alice, which I believe is the one... I'd want to see that on screen to be sure. So in terms of reality capture, we're going to be guided mostly by the community: if there are a lot of requests around that, we would then consider adding support for it. But yeah, we'll leave it there. Any other... Okay, the last one. Any thoughts on using Houdini Engine as a runtime solution? It's currently only for the editor. Yes — we essentially get this question almost every time we talk about Houdini Engine. It's very much, very much on our minds. You can kind of see, if you will, that some of these problems we're solving are laying the technical foundations for such a thing. It's a big thing, so it's not something we can immediately get to, but you can see we're moving in this direction.
Good, I think that's it for now. Questions on the chat? No. Not here. Good, well, thanks guys. Thanks, Jeff, for taking us through the high level — the why and the what of PDG — and Ken and your team for jumping in on some of these questions. As Jeff mentioned, we'd like to do a more tutorial-format webinar next, specifically on wedging, and probably some stuff beyond that after that. Absolutely. I would say in a month or so, yeah, Jeff? I'm off next week, but the week after we could actually do some really early ones. I actually had a few example files here to run through — the machine I'm on right now doesn't run Houdini, though. But yeah, I want to start off by doing what I consider to be the basics: being able to run anything we want, use any attributes we want, and be able to marry different sorts of data — basically all the workflows that we're very familiar with in SOPs — and see how we can recreate them inside of the PDG with as much granularity as we want, eventually replacing that SOP framework with pure PDG graphs for some examples as well. And there are quite a few PDG tutorials on sidefx.com, so if you haven't seen those, go have a look there. Kenny Lammers did something with PDG and Unity together, and Moritz Schwind from Entagma did a wedging tutorial. So both of those are available on sidefx.com. Okay, thanks again everyone. Thanks to all the attendees for joining us, and talk to you soon.