Hello! It's great to see so many people here. It's been an amazing conference so far, and I'm incredibly grateful that I get to be here. This is actually my first Blender Conference, so it's been a lot of fun. And I'm really excited to share a little bit about some of the production pipelines we've been using for interactive platforms.

Philip McCluskey and myself are the two MoGraph people at Vectorform, now known as Launch by NTT Data. We have a larger team of great designers and an even larger team of brilliant engineers and developers, but it's just the two of us doing a lot of the tech art and all of the technical direction. So this talk is really about the production pipelines we're using as a very small team, where we're often the sole tech artist on a project, bridging the gap between design and engineering, which is particularly important for interactive platforms.

You can see a little bit of the work we've done in the past: anything from WebGL to extended reality experiences like augmented, mixed, and virtual reality, mobile apps, and everything in between. We've also done installation work, which is always incredibly fun, because you get to build something that merges physical and digital experiences.

Like I said, we work as an innovation team, essentially. We handle things from ideation through final deployment, which is really cool because it means we're fully integrated the entire way. We'll work with teams who are working with clients to define the problems that need solving in the first place, and we'll work with the people actually implementing it for clients, customers, and everyone else.

So what exactly does a small team need? We focus on being very strategic and very innovative with our projects, but it's still a very small team, and for me it comes down to two things: expanded capabilities and greater efficiency. Greater efficiency because, if I can make the computer do the work for me, I will. I'll happily spend plenty of time now just so I can spend less time in the future. That efficiency is really important for keeping on top of deadlines, especially for anything that will be repeated, like contracts where we're continuing updates to a platform over many, many years. And then expanded capabilities: when you're a small team, you're looking for anything that gives you a new technique or a new approach, anything that helps you do something you couldn't do before. Those are the two things I'm looking for when we build out a pipeline.

To show that, I'm going to walk through a few case studies, so we have some concrete examples of how we've been implementing Blender in more and more of our interactive projects. In the past, we've always had a lot of flexibility; team members have been based in Maya or LightWave, and a lot of Cinema 4D. We still do a bunch of 3D animation, 2D animation, and all of that, so our team has used a lot of Cinema 4D in the past.
In 2018, I was in the middle of moving some of our team over to Modo, just because it was a strong modeler and was really helpful for a lot of our real-time projects, where we were sending things to Unreal Engine or to Unity for game development. But there was one project we got a little bit stuck on. In 2019, I think, Microsoft came to us looking for a template project for their newly released HoloLens 2 hardware. They had a bunch of new experiences and new interaction models that they needed to demonstrate, and they wanted a demo project that developers could download, open in Unity, and see how it was built, how all of these new, more physical interactions were being done. With the HoloLens 2, you could grab things and push things; it was a lot more direct interaction than the HoloLens 1. So they came to us to help build that template project.

This was a mixed reality project. We got a bunch of CAD assets from the client. Those had to be converted (we used PiXYZ for that) and brought into Modo, and the rigging was just completely insane. I'm sure anyone who's worked with CAD files before knows that unless they're specifically built for use in real time, it's going to be a disaster. In this case, the joints for the robot were just scattered around the room at random angles, and no matter what I tried, I could not get that fixed in Modo while also preserving the normals. It was weird. I'm sure it was a problem with me, not Modo, but I couldn't figure it out. And with the Blender 2.8 update, things were getting a little more accessible for people like me who had never really used Blender before. It turns out that using the join operator, I could set up a bunch of dummy objects with clean pivots, merge the incorrect objects into the new dummy objects, and have everything fully rigged and ready to go. That actually worked pretty well, and it's the first time we actually started implementing Blender into our pipeline.

So this was the first little drop that quickly grew into other projects. In Blender we were able to process things, then we sent them back to Modo for final processing. Modo works really well with FBX files, so that was handy: we brought everything into Unity and completed the project that way. I think on social media I called this one of the silliest pipelines I'd ever built, going from Modo into Blender and back to Modo again. But it's whatever gets the job done, and we do that sometimes; sometimes one tool isn't enough and we'll combine multiple ones. That was the start.
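As an aside, if you wanted to script that join-based re-pivot instead of doing it by hand in the UI like I did, a minimal sketch with Blender's Python API might look like the following. The part name and pivot location are hypothetical, and the placeholder is an empty mesh object rather than an actual Empty, since a join needs matching object types:

```python
import bpy

def repivot(part_name, pivot_location):
    """Join a misaligned CAD part into an empty placeholder mesh
    positioned at the desired pivot; world-space geometry (and
    normals) are preserved while the object gains a clean transform."""
    part = bpy.data.objects[part_name]

    # Placeholder mesh object with no geometry, acting as the new pivot.
    dummy_mesh = bpy.data.meshes.new(part_name + "_pivot")
    dummy = bpy.data.objects.new(part_name + "_pivot", dummy_mesh)
    dummy.location = pivot_location
    bpy.context.collection.objects.link(dummy)

    # Join merges the selection into the active object, so the part's
    # geometry stays put while adopting the dummy's pivot.
    bpy.ops.object.select_all(action='DESELECT')
    part.select_set(True)
    dummy.select_set(True)
    bpy.context.view_layer.objects.active = dummy
    bpy.ops.object.join()
    return dummy

# Hypothetical example: re-pivot a robot joint at its hinge point.
# repivot("Robot_Elbow", (0.42, 0.0, 1.1))
```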
Unfortunately, Philip McCluskey was not able to be here today, so he'll be presenting this next case study as a recording, but just to give you a brief introduction: Adrian Steel builds custom vehicle outfits. They make a bunch of custom elements that are configurable and customizable, and their entire process was based on pen and paper. It was a terrible process. I mean, it worked for them, but it was very slow and difficult to customize further. They had presets, and it was a huge pain to figure out, well, does this rack actually fit with this preconfigured customization in this specific van model, that sort of thing? So they came to our team looking for something far more modern: dynamic, real-time 3D, something that would allow their engineers in the actual autobase to build out a custom configuration completely on their own and know that it was actually going to work. So without further ado, I'll pass this over to Philip, and he'll explain it.

[Philip, recorded] Thank you very much. I'm Philip McCluskey, one of the senior motion graphics artists here at Launch by NTT Data. Unfortunately, I can't be there in person with you, but you're in great hands with John Einselen. I'll be walking through how I use Blender in my workflow for processing client CAD models, getting them cleaned up and ready for deployment in the vehicle configurator, which is an iOS app for Adrian Steel. Without further ado, let's jump into it.

We're back over in Blender, and I'm going to use one of my favorite features of Blender. Having worked with Maya for years and years and years, one of the quality-of-life things I love about Blender is the Quick Favorites menu, which I'm sure a lot of you are very familiar with. The default hotkey of Q brings up my Quick Favorites, and I have in here the tool sets and actions that I use for this particular pipeline. The first one I'm going to use is the PLY importer, so I'll go ahead and do that, just like that.

The reason we're using a PLY file is that it's what we get from our client. We've worked extensively with their engineering team to come up with a file type, as well as model prep, that works well with this pipeline, so we can quickly process their models and get them into the app as fast as we can, because when they have new designs, they want them available to their customers immediately. In working with their engineering team, we came up with a file format that works well from CAD conversion into Blender as well as for their engineers. They're building the CAD files for manufacturing, so the models have holes in them for manufacturing purposes, folds, bends, screws, nuts, bolts, all of that is in there. And we don't need it. What we basically need is a low-poly mesh version of the model to act as a proxy in the vehicle, so the customer knows what it looks like and how it functions. So we worked with them to remove all that excess data, but sometimes things squeak through, and that's where I come in.

The first thing I do is go into Edit Mode and back to the Quick Favorites. When I open it, you'll notice it's a different set of tools: for those not aware of how Quick Favorites works, it depends on what mode you're in. If I go back to Object Mode, I get those import options and a few other things, but in Edit Mode I have the ones I use the most. So we'll select all, go to Normals, and just flatten those out. I don't strictly need to do that, since the app doesn't use the normals, but I like to flatten them anyway because it makes it easier to identify issues that may have arisen somewhere in the export, things that slipped through the cracks that I need to take care of. With this mesh, I know there's some internal geometry that I need to take care of, so we'll go ahead and select that, and we can zoom in real quick there.
I'm going to go into vertex mode, and I just want all of those. Let me re-select that one more time. I want all those, and I want those. Done. When I come back and look at this, it didn't change anything about the silhouette of the model, but we've removed quite a few internal faces that we just don't need for the app. It just optimizes the model. So that's an example of some of the mesh cleanup we have to do with these models.

I'll go ahead and copy the product name from here, go back to my Quick Favorites, and save this out as a COLLADA file, which is just what we use for the app. Go ahead and export that. Then we'll take that over into Maya, and I'm going to import that COLLADA file. I know it's already the correct size, and I have the name already saved, so I'll just rename it as the mesh product number and drop it into the node system that we use. This could be done in Blender, but I already have the hierarchy set up in here. What I really need out of Maya for this particular application is the material assignment, because the app uses a legacy material library, which was made in an RMS program. So I'm just going to assign it this metal aluminum cast material, which is... and from there, I just have to export the selection back to that COLLADA format.

Next, I load that COLLADA file into my thumbnail template, which I've already created in Blender with the appropriate backdrop, materials, camera, and lighting setup that matches all of the previously rendered objects. I import it, scale it down a little to fit in my frame, bring it up into the optical middle, and assign my thumbnail gray, which matches all of the other renders I've already done.

Then, with that object selected, I come over to my output. John Einselen has written a lot of really nice plugins for us, and in this case I'm using his autosave images plugin, which I already have mostly set up. If I go up into that file path there, you can see there's just the PLY file and the COLLADA file in the folder; I click to accept, and that's going to be the render location. He has a nice little cheat sheet of variables; in this instance, we're using the item selection. So I come down to the file name, where he's got a few presets, and I use the custom string, which I've set to the item from the variable list. I want it in PNG format with transparency. Then all I have to do is make sure that object is selected, which I did previously, and hit F12 to render the image.

Wait for that to finish up. Now it's done, so I click Escape, and I can come back to the folder where we just had the COLLADA and PLY files, and we now have a PNG, which is the thumbnail we rendered for that product. It's a great tool that we use often when we're dealing with dozens and dozens of these products; just in the naming of the files and the rendering, it has saved us a bunch of time. I know John's working on a batch rendering system, which is still in development, but we hope to have that implemented soon. With that, I'll hand it back over to John Einselen. Thank you very much.
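For reference, the core of that auto-naming workflow can be sketched in a few lines of Blender Python. This is only an illustrative reduction of the add-on's behavior: the {item} template variable and the output directory are stand-ins for its configurable options.

```python
import os
import bpy

def render_thumbnail(output_dir, template="{item}"):
    """Render a product thumbnail named from a template variable,
    here the active object's name (the "item" from the cheat sheet)."""
    item = bpy.context.view_layer.objects.active.name

    scene = bpy.context.scene
    scene.render.film_transparent = True              # keep the alpha
    scene.render.image_settings.file_format = 'PNG'
    scene.render.image_settings.color_mode = 'RGBA'
    scene.render.filepath = os.path.join(output_dir, template.format(item=item))
    bpy.ops.render.render(write_still=True)           # saves the PNG

# render_thumbnail("//renders")  # e.g. writes <product number>.png
```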
All right, thank you so much, Philip. Again, sorry he couldn't be here today; I really appreciate him recording that. He's been the lead tech artist on that project for a while now, processing a lot of different assets. And it's a good example of a limited amount of automation: we're not trying to automate a system where you drop a file into a folder and everything is done for you. It still requires artistic input, but some small tools and small pipeline details can help speed that up.

Now, with the next case study, and another one towards the end, I'm going to be very general. We have a lot of clients under NDA, and we want to be very, very careful not to leak anything proprietary. This was a product customizer: they had a number of consumer products that they would customize, and they needed a way to show this online, to show people different variations and customizations of a product, all in real time. So this was a three.js project, working with the engineering team, and a big part of it was a lot of remodeling and model cleanup, that sort of thing. There were a number of very small tools we ended up creating to help with that process, and I'll show quick sketches of a few of them in a moment.

Parts of this process were fully automated. For example, in Photoshop we had a bunch of source textures we were using, and we set it up so that each texture had an associated resolution. Whenever the Photoshop file was saved, the system would auto-detect that, compress it to a PNG file, scale it down to the desired resolution, and save it into the source images folder for Blender. That was a really nice automation that made it a lot faster to iterate: you could make changes in Photoshop, save the file, go over into Blender, update images, and everything was good to go, without having to load Photoshop files in Blender that were way larger than they needed to be.

Another tool was Radial Offset, a very quick Python plugin that let us radially shift things without changing bevel dimensions. You could select a beveled edge and move it radially in and out without changing any of the other modeling in that selection, which was really helpful as we created variations of different products.

Me being the person that I am (this is very much a my-personality thing, not a requirement), I got really frustrated trying to use Project From View and Fit to UV Bounds and needing an exact bounding to that area, because we were matching print files and all of that. So I created a planar UV projection plugin that allows numeric control of everything, which was pretty helpful. It also meant everything was repeatable.

Finally, as we were constantly reloading images and loading new ones, we needed a quick way to reformat everything. The VF Update Images plugin lets you define file name suffixes that are imported with specific settings, so you can customize the color space, the alpha settings, and so on. If a file ends in "-color", it loads as sRGB; if it ends in "-normal", it loads as non-color data. That sped up the workflow a lot.
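That suffix convention is simple enough to sketch. Here's roughly what the assignment looks like in Blender Python; the exact suffix strings here are illustrative, since the actual add-on makes them configurable:

```python
import bpy

# Map file-name suffixes to Blender color spaces (illustrative rules).
RULES = {"-color": "sRGB", "-normal": "Non-Color"}

for img in bpy.data.images:
    stem = bpy.path.basename(img.filepath).rsplit(".", 1)[0].lower()
    for suffix, space in RULES.items():
        if stem.endswith(suffix):
            img.colorspace_settings.name = space
    img.reload()  # pick up any changed pixels on disk as well
```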
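And going back to Radial Offset: the trick is that every selected vertex moves the same absolute distance toward or away from the axis, rather than being scaled, which is what keeps bevel widths intact. A minimal sketch of that idea, assuming the Z axis as the radial center (run in Edit Mode):

```python
import bmesh
import bpy
from mathutils import Vector

def radial_offset(distance):
    """Push every selected vertex the same distance away from (or,
    with a negative value, toward) the Z axis, preserving bevels."""
    obj = bpy.context.edit_object
    bm = bmesh.from_edit_mesh(obj.data)
    for v in bm.verts:
        if not v.select:
            continue
        radial = Vector((v.co.x, v.co.y, 0.0))  # direction from the axis
        if radial.length > 0.0:
            v.co += radial.normalized() * distance
    bmesh.update_edit_mesh(obj.data)

# radial_offset(0.005)  # nudge the selection 5 mm outward
```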
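The Photoshop automation lived outside Blender entirely. A hedged sketch of the watcher half, assuming the third-party watchdog and Pillow packages (Pillow reads a PSD's flattened composite), with hypothetical folder names and per-file resolutions, and square masters assumed for simplicity:

```python
import time
from pathlib import Path
from PIL import Image
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SOURCE = Path("psd_masters")      # where Photoshop saves (hypothetical)
DEST = Path("blender_textures")   # Blender's source images folder
SIZES = {"body-color.psd": 2048, "label-normal.psd": 1024}

class PsdHandler(FileSystemEventHandler):
    def on_modified(self, event):
        path = Path(event.src_path)
        if path.suffix.lower() != ".psd":
            return
        size = SIZES.get(path.name, 1024)
        img = Image.open(path).convert("RGBA")   # flattened composite
        img = img.resize((size, size), Image.LANCZOS)
        img.save(DEST / (path.stem + ".png"))    # compressed PNG out

observer = Observer()
observer.schedule(PsdHandler(), str(SOURCE))
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```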
And then, of course, swapping texture sets. We manually compressed everything because we were delivering on the web, and we needed a generic asset with elements that would load very quickly, so we had very low resolution, highly compressed, basically blank textures for most of the products, and those textures would be replaced on an as-needed basis online. But to do that, we needed to handle all of the JPEG compression manually. The GLB exporter, as wonderful as it is, does support compression, which is critical for online delivery, but its JPEG compression was very low quality for the file sizes it was producing, so we had to compress all of those outside of Blender. That was a huge pain: manually making sure everything was compressed correctly but not torn up, that sort of thing. Hence being able to swap images. This is just a find-and-replace, super simple. In this case, it takes the PNG and JPEG file extensions, swaps them, and reloads all the images. We kept PNG and JPEG files next to each other: the JPEG ones for delivery online, and the PNG files for testing internally as we furthered development of the textures and shaders. That allowed us to switch between two completely different sets of textures instantly.

Also, anything that needed to be posed we tried to keep as editable as possible, so all of the bands for the various watch models were modeled flat and then posed using spline deformation. That worked really well.

And then finally we had VF Delivery, a plugin that does nothing more than set up the GLB export exactly how we need it. It makes exports easily repeatable and automatically names the file: I can select multiple items, and whatever item is active, that's the name of the exported file. I've already set my directory, so all I have to do is click export and it saves out that file; if there's an existing file, it replaces it. That way I can just synchronize that folder with the rest of the engineering team and they always have up-to-date files. The GLB export is great, but I wanted to make sure I wasn't accidentally forgetting to check or uncheck something, because I will make mistakes constantly. To make things me-proof, this was a really helpful system. And then, of course, once it's been exported, we can load it in the three.js viewer, and all of the textures are there, ready to go.

So this is the in-depth workflow we used inside of Blender, ignoring all of the stuff outside of Blender. We had a lot of modeling and UV tool sets that we used. Bake a Node isn't something we developed internally; that's a plugin you can find on various websites, and it's really helpful. It just lets you select a node in the shader, click bake, and it renders it out to a file. I use that constantly for occlusion maps and things like that. Then, of course, the GLB exporter supports additional data that the Eevee renderer, being a raster-based engine, just doesn't: Eevee doesn't have an occlusion map input you can use, but three.js does, which is really helpful. We didn't even use any lighting in our final project, just an HDRI with occlusion maps, and that was it. So the GLB settings group was really helpful for that. Then we're swapping inputs for the images constantly, and outputting everything with the delivery plugin.
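That texture swap really is just a find-and-replace over image paths plus a reload; a minimal sketch:

```python
import bpy

def swap_textures(old_ext=".png", new_ext=".jpg"):
    """Toggle every image in the file between two sibling texture sets
    that differ only by file extension, then reload from disk."""
    for img in bpy.data.images:
        if img.filepath.lower().endswith(old_ext):
            img.filepath = img.filepath[: -len(old_ext)] + new_ext
            img.reload()

# swap_textures(".png", ".jpg")  # web-delivery set
# swap_textures(".jpg", ".png")  # full-quality working set
```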
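And the delivery side can be sketched the same way: export the current selection as a GLB named after the active object, with the exporter options pinned so nothing depends on me remembering checkboxes. The specific options shown are illustrative, not the add-on's exact set:

```python
import os
import bpy

def deliver_glb(directory):
    """Export the selection as <active object name>.glb into a preset
    directory, overwriting any existing file of that name."""
    name = bpy.context.view_layer.objects.active.name
    filepath = os.path.join(bpy.path.abspath(directory), name + ".glb")
    bpy.ops.export_scene.gltf(
        filepath=filepath,
        export_format='GLB',
        use_selection=True,   # only the selected objects
        export_yup=True,      # match the glTF / three.js convention
        export_apply=True,    # apply modifiers on export
    )

# deliver_glb("//delivery")  # then sync the folder with engineering
```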
And I will include a link at the end to the GitHub. These are just our internal tools; I make no promises that they're helpful for anyone else. But if you think they'll be helpful for you and your workflows and pipelines, please go to GitHub and download them. They're all free.

The next case study I want to share is Jeep Badge of Honor, an account we've had for a while. It's a pretty cool project: Stellantis, which owns the Jeep brand, has a kind of rewards program for Jeep drivers in North America. If you're really into driving through tough terrain, there are trails throughout North America where you can drive your Jeep, and when you complete a trail, you can earn a physical badge. They will literally ship you a little badge that you can affix to your vehicle. Or, if you're spicy like some Jeep owners, you'll earn a bunch of badges and then put them on your Toyota Prius, because it's hilarious. But yeah, it's a cool program. This past year we were working on adding a little more animation and life to the interface. That included animations throughout, all handled in After Effects, but we wanted something a lot cooler for when you first load the app, and something a lot cooler for when you actually earn a badge; previously a user interface popped up, and that was pretty much it. We wanted something much more dynamic.

For the opening screen, we use Lottie a lot. It's a vector animation platform that lets you load animations very efficiently and play them back. The nice thing, too, is that it's rendered live: you can animate at 25 or 30 frames per second and it will play back at whatever speed the phone or tablet or website can handle. I really wanted to add a lot more depth, because there were mountains in the design, and I really wanted a camera move, but that's a pain to rig in After Effects; I didn't want to rig it with a bunch of nulls with expressions pointing to vector points. So I did go down the rabbit hole of trying to get Blender to export a Lottie file directly, and I failed horribly. But I still had all of that work started. I had the camera move that I wanted, so I just rendered that out and traced it in After Effects. Sometimes the stupid solution is a lot faster. The best part, of course, is that all of those points animate continuously, which means it's not displaying a new image every frame (which would be limited to 25 or 30 frames per second); it's all vector animation, so it plays smoothly on any device.

The other thing, as I mentioned before, was adding a little more excitement to earning a badge. That's a really big celebration for the user, and we wanted to create something unique and interesting. My goal was something that would be unique for every single badge, but we needed to do that efficiently, because there were well over 50 trails and they add new ones every few months. So in came geometry nodes. Using some super basic base geometry along with some instances and a fair bit of math, I was able to create this kind of dirt cloud explosion. We explored some other concepts as well, including some vector animations and things like that, but I felt this was a really fun direction to explore.
So, with this completely procedural animation, where everything is driven by random values and noise, we were able to set up two of them and animate them in based on the index of the batch render we're doing. It randomizes the directions they fly in, the noise that's used to displace them, and all of that. So every single one is actually a fully unique render: a completely consistent style, but every pattern is unique to the badge that's being rendered.

And with that, I mentioned batch rendering. This project is when I first started expanding the autosave image plugin to handle a lot more, in this case loading a folder of images, assigning each image to a node in a specific material, and then just rendering everything. Now, Python blocks things, so it will freeze the interface while you're rendering, but all of this was done in Eevee. It was pretty fast; not a problem. Right now it's still not implemented in the app, but it's something we're hoping to implement in upcoming development, because, at least internally, this was our favorite approach. And the cool thing is that not only does it render a completely unique pattern for every single badge based on the input image, but all of that output is automatically named, and all of it is automatically compiled into the compressed MP4 files we need to deliver to the server. So it's a pretty easy system to use: once it's been set up, all we need to do is load a bunch of new designs and render them out.

A big part of that was being able to incorporate FFmpeg into Blender. We always render to image sequences, no matter what, and then I'm constantly recompiling those using FFmpeg, because it has far superior compression to Adobe Media Encoder. In this case, we were able to implement it directly in Blender: when Blender completes a render and all of those images are ready to be compiled into an MOV or an MP4 file or whatever format we're using for delivery, we can do that straight from Blender, before we even have to touch anything else. And it uses all of the same naming conventions. In this case we have "badge_" and then the node variable, which takes the name of the image file that was input, strips out the file extension, and just gives you the file name. So everything is automatically named and ready for delivery to the server, ready to use.
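Hooking FFmpeg up to Blender is mostly a render-complete handler plus a subprocess call. A sketch, assuming ffmpeg is on the system PATH and the scene writes a 4-digit-padded PNG sequence; the encoder settings here are just reasonable defaults, not our delivery spec:

```python
import subprocess
import bpy
from bpy.app.handlers import persistent

@persistent
def compile_mp4(scene):
    """After a render finishes, compile the image sequence into an
    MP4 that reuses the sequence's own base name."""
    base = bpy.path.abspath(scene.render.filepath)  # e.g. .../badge_trail_
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", str(scene.render.fps),
        "-i", base + "%04d.png",        # matches Blender's frame padding
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",          # broad playback compatibility
        "-crf", "20",
        base.rstrip("_") + ".mp4",
    ], check=True)

bpy.app.handlers.render_complete.append(compile_mp4)
```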
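And the batch loop that feeds it is roughly this shape: walk a folder of badge art, assign each image to a named node in the badge material, and render. The material and node names here are hypothetical, and, as noted, bpy.ops.render.render blocks the interface while it runs:

```python
import os
import bpy

def batch_render(image_dir, material="Badge", node="Badge Art"):
    """Render the animation once per input image, naming the output
    after the source file (mirroring the badge_{node} convention)."""
    img_node = bpy.data.materials[material].node_tree.nodes[node]
    scene = bpy.context.scene
    for fname in sorted(os.listdir(image_dir)):
        stem, ext = os.path.splitext(fname)
        if ext.lower() not in (".png", ".jpg"):
            continue
        img_node.image = bpy.data.images.load(os.path.join(image_dir, fname))
        scene.render.filepath = f"//renders/badge_{stem}_"
        bpy.ops.render.render(animation=True)  # writes the frame sequence

# batch_render("/path/to/badge_art")
```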
Finally, we have another NDA client, so I want to be very careful. We do some work with automotive HMI design. A lot of automakers are looking for more real-time interfaces and effects, and designs have kind of stagnated depending on the tools being used, so a lot of automakers are moving to Unreal Engine or Unity to actually drive the displays you see in the vehicle. We work with companies to build out systems and processes, or even just demos to say: this is what's possible in real time, it looks cool, maybe you want to do more? So this is a genericized version, a completely generic project, but it demonstrates some of the concepts we're working with. We do a lot of custom shader development in Unity; this demo is built in Unity, though we do the same in Unreal Engine, and we try to be fairly platform agnostic and work with a number of others as well.

As you can see, there are particle effects, dials that need to animate, atmospheric effects, things like that. A lot of our geometry-based stuff we now actually build in geometry nodes. We take a source curve and mesh it inside geometry nodes, and we make sure we can build out the UV map and all of that, because that UV map is then used inside Unity to control animations, reveal effects, and so on. Here you can see a custom shader in Unity taking all of that geometry nodes mesh data and rendering it with customizable parameters, so you can adjust the dial completion and animate things in with effects.

But the first thing in this demo was a particle effect. This was a more recent request, and I really wanted a good way of authoring it. The challenge was that the authoring systems were all Unity plugins where you had to position a bunch of arrows: okay, we want the particles to move this way now, and we need to make a bunch of different icons and shapes. We needed a better way to do this. Unfortunately, the better way was difficult to find. You can do pure math-based stuff in Python, which is cool, but not easily art-directable unless you're really good with mathematical formulas, and I come from the art side of things; I do my best, but maths can be a bit of a stretch. The art-directable solution was to use Houdini. That was the only solution I could find. But thankfully, you can write binary files with Python.

So, taking a source curve in Blender and generating a cubic grid of points allowed us, in geometry nodes, to capture the distance to the curve and the tangent of the curve. When a particle is further away from the curve, it will move towards it; the closer it is to the curve, the more it will move with the curve. That gave us the ability to create a volume field that Unity can use for particle effects. Here you can see arrows indicating the direction of flow. Everything is normalized in this view; typically it's a negative-one-to-one range. With simulation nodes, we can even simulate this directly in Blender, so we can preview exactly what that curve is going to look like. All of this is linked together, so as you edit the curve, everything else updates automatically. And then, to export, we extended the delivery plugin to be able to write out the custom .vf binary format that Unity requires. It's as simple as selecting that original volumetric grid and clicking export; that's pretty much it. Inside Unity, you can see in the lower right-hand corner a volumetric preview of the file, and as soon as we play that simulation with a specific emitter pattern and that sort of thing, that's what it looks like. Surely there are other ways of building volume field files, but to the best of my knowledge, this is at least the first I've been able to find in Blender. And it means we don't need Houdini. Which is bold, I know. Yeah, it's been a lot of fun.
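For the curious, the binary writing itself is tiny. Here's a hedged sketch of a .vf writer, following the layout documented for Unity's VFX Graph vector-field importer as I understand it: a "VF_V" FourCC, the grid dimensions as three unsigned shorts, then one XYZ float triple per cell. Treat the header details as assumptions to verify against Unity's documentation:

```python
import struct

def write_vf(path, size_x, size_y, size_z, vectors):
    """vectors: flat list of (x, y, z) tuples, length = x * y * z,
    ordered to match Unity's sampling (remember Unity is Y-up)."""
    assert len(vectors) == size_x * size_y * size_z
    with open(path, "wb") as f:
        f.write(b"VF_V")                                # vector field FourCC
        f.write(struct.pack("<HHH", size_x, size_y, size_z))
        for v in vectors:
            f.write(struct.pack("<fff", *v))            # little-endian floats

# write_vf("flow.vf", 16, 16, 16, field_samples)
```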
You may have noticed a pattern, where more and more of our projects are relying on Blender. We started off going from Modo into Blender and back to Modo. Then we were starting in Blender, moving into Maya, and back to Blender. And now more and more of our projects are built entirely in Blender: both pre-rendered work, from stylized 2D-style graphics that are all based on 3D animation to product visualization that's intended to be more photoreal, and, for a lot of our real-time projects for interactive platforms, we're now using more and more Blender too.

That includes this presentation itself, which was, of course, naturally, all rendered in Blender. It's probably the simplest pipeline: I created a background that's completely procedural, based on the Launch branding, especially inspired by some of the shape studies by Ekstjen Miller, one of our designers. He's a fantastic designer; the entire team is absolutely incredible, and I'm really blessed to work with all of them. With the procedural background, I just set up a collection for every slide that I wanted. And because I would far prefer to spend four hours building geometry nodes than four hours aligning every single spline by hand, I automated the splines as well, because doing that manually sucks, and it's so much fun to just type new words and have all the splines adjust to fit. Then, again using batch rendering, I had already set it up so we can batch render collections: you select a parent collection, and every collection inside it is toggled on and off and rendered. All of that works with the auto naming, so each render is saved out with the name of its collection. And that's pretty much it.
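That per-collection batch render is easy to sketch too: toggle each child collection of a parent on in turn, render a still, and name it after the collection. The collection and output names here are placeholders:

```python
import bpy

def render_collections(parent_name, out_dir="//slides/"):
    """Render one still per child collection, hiding all siblings and
    naming each output after its collection."""
    children = list(bpy.data.collections[parent_name].children)
    for active in children:
        for col in children:
            col.hide_render = (col is not active)
            col.hide_viewport = (col is not active)
        bpy.context.scene.render.filepath = out_dir + active.name
        bpy.ops.render.render(write_still=True)

# render_collections("Slides")
```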
So it does leave us with a little question, because we typically have the freedom to choose what platforms we want to use. It's a small team; we can define our own workflows and pipelines. So why Blender? Everything else has a scripting API as well. For me, it boils down to two things: accessibility and community. And I'm not talking about accessibility in the usual sense, like legibility for people with low vision (I wear very thick glasses, so this one's coming from me), but accessibility in the sense that it's free. It doesn't hurt to try. That's what got us started on this journey: well, I'm failing in Modo, why don't I try Blender? It's that accessibility that really got things started, and I think it makes Blender a really attractive solution for "maybe this is a unique problem Blender can solve a little more easily than something else." Then that snowballs into something bigger, and before you know it, you're building presentations in Blender. The other thing is community: having the energy of the community behind Blender, the momentum, people sharing things they're building, people sharing plugins. It's what gives the product life and vitality. So a big part of this is just a big thank you to all of you, because I think it's the users who make Blender great. Thank you. And yeah, I hope this gives back to the community a little bit. If there's anything you can benefit from that we've been developing, all of our stuff is released on GitHub for free. These aren't plugins that we're offering community support for; this is just our internal workflow. If it works for you, that's awesome. If it's something you can hack into something new and different and better, even better. So that's it. Thank you so much for your time.

[Q&A] Yes? Yeah, I have a demo project set up. It's two different plugins right now. I have the plugin that generates the volumetric grid of points; that was built for a completely different project, a long time ago, a fully rendered thing. It just so happened that I had already built a tool that was going to help me with this. So I reordered the XYZ generation so it would fit Unity, keeping in mind that Blender is Z-up and Unity is Y-up. I'll try not to get into whether Y-up is correct. Sorry, again, I come from the art side, not architecture, though my architect mother will hate me for this. So there is a demo for that one, where you can import a volume field from Unity into Blender, so you have all of that data in Blender. And the second plugin is VF Delivery. Again, I know we're undergoing an acquisition, we're Launch by NTT Data now, but you'll still see VF branding pretty much everywhere. The delivery plugin is what reads all of those values and saves them out to the binary file format. So it's the use of both of those plugins: the generation plugin includes an import demo, and the delivery plugin includes a demo showing how you can set up the curves and that sort of thing. And literally anything you can do in geometry nodes, you can save out to a vector field. I've used it for doing a split, pushing particles to either side with random noise applied, without using any curves at all. As long as you have values in geometry nodes, that's all you need. Any other questions?

Gotcha. So you're asking about development timelines in general, from when we export a file to when we see the results. And this was actually built in three.js, not Babylon.js; I'd like to try out Babylon on the next project, because I'd like to play around with their online editor. But yeah, it took some time for the engineering department to build out the system, because they were working with a bunch of back-end stuff; 98% or more of the project was back-end work, not actually building the visualizer. Once the visualizer was incorporated, an update could be done in an hour or so, because all we're doing is exporting a file, updating it on their server, updating references if it's a different file name, making sure it's implemented, and then we can view it. For me, I was just loading things in the three.js viewer online to validate, and that's instant; I can import the file directly from my computer.

Yeah, we had used PiXYZ for the robotics project with Microsoft. PiXYZ is okay, but the quality we were getting at high decimation wasn't working for us. There were some challenges, so we ended up going with more of an artist-driven approach. In Philip's presentation, he's working directly with the CAD engineers, so they can remove most of the stuff before they even export it in the first place, which is by far the best solution. There are just some pieces that are highly functional in CAD, so he does a little manual cleanup, but it's much more of a quality-check process than remodeling significant portions of the data. Any other questions? Cool. I don't want to hold up our next presenter, so with that, I bid you farewell.