OK, hi, everyone. Just getting acquainted with this. OK, no speaker notes. That's fine. But I do have a timer, so I don't need this. OK, so back to my title slide. I'm going to talk to you about faster product creation in a fashion enterprise with serverless. What that actually means is that we worked for an American shoe company that's been in business for several decades. With my team, we've worked over the past couple of years on how to help them make new products faster and bring them to market faster. For that, we've leveraged Blender. That's why I'm here. And the slightly quirky part is that we've leveraged Blender on serverless services. I will explain what that means a little later.

I will talk a fair amount about Amazon Web Services. Just keep in mind that everything I tell you about Amazon Web Services in this context, you could also do on other cloud platforms. Mostly, that would be Azure Functions, the Azure equivalent of Lambda functions, and Google Cloud Platform also has its own way of doing it. So that would be possible too; it's just that in this context, we were working with Amazon Web Services.

So I'll start by talking to you about the old way at that company, how until recently they were designing new shoes. Imagine you're a shoe designer and you want to create a new product for an upcoming season. You happen to be attending a wonderful Blender conference, because that's something you like. And you meet somebody who has a pretty cool pair of lavender shoes, and you're like, hey, I like this style. Maybe I would like to create a new variant of that product for next year.

This is how you would do it the old way. You go into your shared folder somewhere on your network. You try to find the vector drawings for that shoe. You load them in a couple of views, like side view, top view, maybe other sides. You load that into your favorite 2D drawing application. And you start picking the color, because you really like that color.
So you apply that color to the different parts in every view. And now you're like, actually, you know what? Why don't we create a limited edition for next year's Blender conference and add a Blender logo on those shoes? And that starts getting really tricky, because you're like, OK, I can put it on the top view, but what would it look like on the side view? You're not so sure. But you like the idea. You like it so much that you want to take it to your boss and pitch it: hey, why don't we create a limited edition for next year?

And now it starts getting very expensive, because if your boss likes it, you're going to go through the physical prototyping process. You create a more technical description of that concept you had in mind, and you ask a factory somewhere around the globe to create that product physically and then send it back to you. It's going to take months, and it's going to cost thousands and thousands of dollars. So you really want to be sure that it's a good concept. But it's really hard to visualize, to imagine what this is actually going to look like. So you really have to be very intuitive about it. There has to be a better way, right?

So now let's look at the new way, what we implemented for that company to create new shoes from now on. There's a little bit of Blender here, but this is a website. The shoe designer connects to the website. They load a blank 3D model of that specific shoe, created by a 3D artist. And you have access to a palette of colors and all sorts of materials. So you apply colors to different parts of the shoe as you see fit. You can look around it. You can add graphics, just the same way as in that 2D drawing application where you could load any drawing. In this one, you can upload a drawing as an SVG or PNG file and apply it to a certain part of the shoe. And you can play around with it. You can change the aspect ratio, the rotation, the position.
You can get crazy and create, like, a psychedelic Blender pattern. You never know. I like that. But I'm not a shoe designer. Don't hold your breath; it might not happen next year. But let's say you like that. Now you go see your boss. And they're like, oh, OK, now I see what you meant. And because you had this real-time feedback, and it was not such a pain to place a graphic on different views, you actually had better ideas. You iterate more. You iterate faster. You can show your boss something, and they can give you feedback like, no, it's too small, or maybe you should space those logos out, because right now I can't even see it's a Blender logo anymore. So the fact that you have a better view fosters collaboration and already improves the process a lot.

If you're wondering, the technology we're using on this part of the website for 3D viewing is the Sketchfab viewer. Maybe there are some Sketchfab people in the room; I know there were Sketchfab people in the room not long ago. You probably all know Sketchfab for the content library, but also for the viewer that's online. They have a commercial product that companies can buy to embed it in their software. So it's very optimized, it looks really good, and it's quite performant even on lower-end devices. Which means that our shoe designers, let's say it's COVID and they are working from home, and maybe they don't have such an amazing workstation, can still use this really smoothly.

So now you're wondering, what about Blender? There's a Save button at the top. When the shoe designer clicks Save, Blender kicks in. In the back end, we trigger a bunch of Cycles renderings of the shoe, to give you a much more photorealistic look, a much more photorealistic feel of what that shoe will actually look like. So it's one more step further.
You had this 3D view you could play around with, and now you get the real feel of it: you see how the materials react to light, and you just get a better sense of what the end product might look like. Maybe you can get additional feedback, for example if you have shiny materials. They have libraries and libraries of materials that are very accurate scans of the fabrics that are actually available in the factories. So they can load those high-fidelity scans, apply them to different parts of the shoe, and get as realistic a view of the shoe as we can using those materials. Whereas in the web 3D viewer, maybe some materials would not be rendered as accurately.

So we render not just one view but several views, and we do this for each version. Let's say we call that a shoe project: like, today I met this person, I like their shoes, so I create a new shoe project, and I can have several versions of that project. You can render many versions, with the same sets of views, for all your projects. What this enables is that now, at the company level, you can very easily browse and look at all the projects that your whole design team has in progress, and have a global view of what's happening in the heads of your designers, basically. And because it's consistent, it makes it really easy to navigate.

But kind of accidentally, what this also enabled is that we now have these very nice pictures of shoes. What we would usually do at that company is create physical samples, prototypes, expensive ones, of the shoes that are selected for the upcoming collection. You make them, ship them to your designers' office, and then you take pictures of these shoes. And out of those pictures, you create catalogs. Those catalogs are going to be used by the sales team to sell the shoes to distributors. Now that you have 3D renderings, you can skip all that. You can decide: these are the shoes we want; let's generate that catalog already.
And maybe let's get feedback from our potential buyers off of a completely virtual catalog, to see if there are actually any products they might like more than others, before we even start prototyping. So adding Blender to the loop really allows for use cases that we didn't have before.

So let's get a little bit technical. How do you get that shoe render, exactly? When the designer clicks the Save button, we get a JSON representation of what we call the configuration of the shoe. It's a set of parameters that basically says: this material is on this part of the shoe, this graphic is here, positioned like this, et cetera, et cetera. The Lambda function grabs those parameters. It then loads a container image, a Docker container; that's a way of packaging applications to allow them to run almost anywhere on servers, so in the back end. There happens to be a very helpful Docker image with Blender in it that is maintained by the R&D team at The New York Times, go figure, and it's available for anyone to try. So that's what we use. The Lambda function takes this Docker image, loads into it a Blender scene containing that shoe, and with a Python script we apply the configuration to the shoe and start rendering. Then we output that image, store it in the cloud, and the Lambda function dies.

Why did we use serverless? Because usually, when you think about rendering 3D things in the cloud, you think: well, render farm, big machines. But this is what a typical load profile of our servers looks like during a normal workday of the design team. Most of the time, nothing happens. And we're not rendering movies; the scenes we're rendering are pretty simple. But we have spiky loads. At some point, we're going to need to render six views of one shoe. Maybe there are going to be two designers who saved projects at the same time. So we have very spiky loads, where we need a lot of computing power at one point.
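To make that Save-to-render pipeline concrete, here is a minimal sketch of what the Lambda side could look like. Everything in it is a hedged assumption for illustration: the part names, material IDs, view names, paths inside the container, and function names are all invented for this example, not the actual production code.

```python
import json
import subprocess

# A hypothetical shoe "configuration" as the Save button might produce
# it. Part names, material IDs, and views are invented for this sketch.
EXAMPLE_CONFIG = {
    "project": "blender-con-limited-edition",
    "parts": [
        {"part": "upper", "material": "suede-lavender-042"},
        {"part": "sole", "material": "rubber-white-001"},
    ],
    "views": ["side", "top", "three-quarter"],
}

# Illustrative paths inside the Docker image.
BLENDER = "/usr/local/blender/blender"  # Blender binary in the image
SCENE = "/opt/scenes/shoe.blend"        # blank shoe scene from the 3D artist
SCRIPT = "/var/task/script.py"          # the bpy automation script

def blender_command(config_path, output_path):
    """Build the headless Blender invocation. Blender ignores every
    argument after `--`, leaving them for the Python script to read."""
    return [
        BLENDER,
        "--background", SCENE,  # no UI: just load the scene
        "--python", SCRIPT,     # run the automation script
        "--", config_path, output_path,
    ]

def handler(event, context=None):
    """Rough shape of the Lambda entry point: dump the configuration
    to disk, render, then (in the real system) upload the result to
    cloud storage before the function dies."""
    config_path, output_path = "/tmp/config.json", "/tmp/render.png"
    with open(config_path, "w") as f:
        json.dump(event["configuration"], f)
    subprocess.run(blender_command(config_path, output_path), check=True)
    return {"rendered": output_path}

# Each saved configuration fans out into one render job per view.
jobs = [{"configuration": EXAMPLE_CONFIG, "view": v}
        for v in EXAMPLE_CONFIG["views"]]
print(len(jobs))  # 3: one render job per requested view
```

In the real system the fan-out and the upload to storage are handled by the cloud plumbing; the point is just how little code sits between the Save click and a headless Blender process.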
Then, five minutes after such a spike, we're not going to need anything for the next eight hours. So we chose serverless, because it scales to zero. This is really important to remember: you don't pay for anything if you don't use it. And that can be very valuable.

But it also starts very fast. We could say, OK, we have a render farm, and outside of work hours we shut it down, or during the lunch break we shut it down. But when you want to spin up instances, they take a minute to start, and it only takes our Lambda function about a minute to render the image in the first place. So the big advantage of Lambda functions here is that they start in an instant. Basically, between the moment the user clicks Save and the moment Blender starts rendering, only about one second passes. You click Save; Amazon Web Services provides you with as many Lambdas as you need; each one loads Blender, starts Blender, loads the scene file, and starts rendering, in just one second. So it's really, really blazing fast.

There's also no maintenance, because there are no machines that you manage. You're not going to have to upgrade the Linux version, or apply a security patch, or, I don't know, check if you have malware. There's none of that, because you just tell Amazon: hey, run this compute task, I don't care how you do it, just get it done. And they do that.

It's also quite easy to set up. Initially, when we were given this task, we thought, oh my god. We felt like we could render a scene with Blender in the cloud; I happened to know that there's bpy, the Python API for Blender, and surely we can do a lot of automation with that. But does it mean we're going to have to deploy a server farm for this, like a render farm or a big machine?
And when we realized we could actually do that with Lambda functions, the big advantage was that in just a few files, and I'll show you a little bit more about that, you can describe this whole workflow: when someone does this, load this container image, run this script, save the output over there, and shut down. It's also highly scalable. If you need 100 Lambda functions in one instant, they will be there. And it's very reliable: if it ever fails, it will retry instantly and succeed. So Lambda functions are really a painless way of building highly scalable and reliable systems. But more importantly, why we picked them is that they're powerful enough for our use case, which is really a niche use case. I attended presentations where you render movies; I would not recommend rendering a movie on this.

So this is how we built it. Basically, there are four files. I'm not going to walk through them too much, but I'll just name them. There's the Dockerfile, that's the red arrow. It says: go grab that image that's maintained by The New York Times, and that's what we're going to use to execute our code. There's a serverless.yaml file. It leverages an open-source framework called the Serverless Framework, which allows you to describe serverless architectures in the cloud as code. It's insanely, insanely valuable and helpful. Basically, we say: hey, I want a Lambda function with this amount of memory, I want it to have access to some cloud storage over there, I want it to run this command when it's executed, et cetera. And then we have two Python scripts. One just says: hey, when the Lambda function wakes up, make it run bpy, the Blender Python API, in headless mode with this script. And the last one is script.py. That's where you would add all the logic for whatever automation you want to do.
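To sketch what that last script could look like: the helper below shows the standard trick of reading arguments placed after `--` (Blender passes those through to the script untouched), and the bpy part shows the general shape of applying a configuration and rendering with Cycles. The object and material names are hypothetical; this is a sketch of the idea under those assumptions, not the production script.

```python
import sys

def script_args(argv):
    """Return the arguments that follow `--`. When Blender runs
    `blender --background shoe.blend --python script.py -- cfg.json out.png`,
    it ignores everything after `--`, so the script can claim it."""
    if "--" not in argv:
        return []
    return argv[argv.index("--") + 1:]

def apply_and_render(config_path, output_path):
    """Apply the shoe configuration to the scene and render it.
    Only runs inside Blender, where the bpy module exists."""
    import json
    import bpy

    with open(config_path) as f:
        config = json.load(f)
    # Assign each chosen material to its part of the shoe
    # (object and material names here are made up).
    for entry in config["parts"]:
        obj = bpy.data.objects[entry["part"]]
        obj.active_material = bpy.data.materials[entry["material"]]
    # Render with Cycles and write the image to disk.
    scene = bpy.context.scene
    scene.render.engine = "CYCLES"
    scene.render.filepath = output_path
    bpy.ops.render.render(write_still=True)

# Inside Blender, sys.argv contains a `--`; outside Blender, nothing runs.
if "--" in sys.argv:
    config_path, output_path = script_args(sys.argv)
    apply_and_render(config_path, output_path)
```

The same structure works for any bpy automation, not just rendering: the script is just ordinary Python that happens to run inside a headless Blender.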
Here, we're doing rendering, but like I said, since we're applying a configuration to a model, you could also generate new shapes with, I don't know, geometry nodes. Anything you can do with bpy, you could do at that stage. If you want to try this at home, or at work, which I encourage you to do, just search with your favorite search engine for "serverless Blender". The first result should be a nice recipe that I wrote, where you can see all the code and play with it.

Why you may not want to try this at home: this is really not for everyone. This is not a render farm. If you were to try and render a full movie using this, you would go bankrupt. You have to bear in mind that serverless is a very smart way of allocating compute resources shared between all the Amazon customers. With all cloud providers, and even if you have servers in your basement, most machines around the world do nothing most of the time. One way to solve that problem is to say: hey, we're going to rent virtual machines that run on actual machines, and we're going to increase the overall usage of compute power around the world. And the next step is to say: we use serverless functions, where even more customers share the same pool of resources. The trade-off is that one second of compute on serverless costs orders of magnitude more than one second of compute on a machine you would have under your desk. It's just that if you have a machine under your desk and don't use it all the time, you're paying for it to do nothing. So with Lambda functions, you only pay for what you use, but what you use is actually more expensive. If you need compute power all the time, like rendering a movie over two weeks, don't do that. Very bad idea.

So that's about it. I would be happy to discuss it with all of you after this presentation. Just a few words about myself. I'm Jean-Rémy Baudoin, or JR; no one gets Jean-Rémy right if you live in the US. I'm French, and I live in New York City. I grew up doing a lot of 3D.
Somehow I got into web development instead, but I ended up doing 3D anyway, which is pretty nice. And I work for Theodo. We're a software agency; we build web applications. When I say that, people think websites, and we do build websites for big names, but a lot of what we do is actually invisible to the public. We build a lot of applications that have very few users, like the designers at that company. We're based out of Paris, London, and New York, and we're about 600 people. And that's about it. Thank you very much.