Good morning. The talk title is "A Study of Automating Eye Mask Creation." I thought about a couple of different titles for this; this one's okay. We will be talking about eye masks, but the important thing is really the study of automating, and in particular I want to talk about wizards, and creating wizards inside Blender. I'm Dr. John Denning, speaking on behalf of Orange Turbine. The target audience for this talk is a little bit toward artists, a little bit toward add-on developers, and a little bit toward Blender developers, so I'm hoping each of you will be able to pick up at least a little something from this talk.

The case study for today is a company called Elio Labs. They're a health and beauty startup that set out to prevent eye bags and puffy eyes, and the way they did this was to create a custom-fitted, 3D-printed eye mask. Its function is similar to compression socks: it applies light pressure in certain areas to prevent fluid from building up. It consists of a soft, flexible liner for comfort, a rigid nylon frame for structure, and an elastic strap to hold it in place. They already had a process in place to produce these, but they wanted to automate a lot of it, and that's where they contacted us to see what we could do. From the client's perspective: they download an app, scan their face following the directions, and the app creates a 3D model of their face. That model gets shipped to Elio, where a designer takes the head data, does some processing, and comes up with two different meshes that get sent to the printers. Those items then get shipped to the customer.
So here is one right here. The original design document consisted of 28 different steps: everything from importing the STL, positioning it at a certain location, potentially scaling it, importing other templates, projecting them to the face, going into Edit Mode, moving some vertices around, deleting some, joining some meshes. It was very involved. It took a skilled artist or designer about 90 minutes to create one mask.

But all of those steps broke down into two categories. First, there were design-related steps: things that are really specific to the design of the mask itself, like identifying and working relative to landmarks on the head — where are the eyes, where's the bridge of the nose, where are the temples, and so on — and making adjustments based on certain design constraints. They needed a certain width for the material to be printable and rigid enough, that kind of thing. Those are very specific to the mask itself.

But then there were a lot of steps that were just about dealing with Blender: managing the modes, going in and out of Edit Mode, adjusting settings, fiddling with the UI, selecting things, changing the viewport, adjusting modifiers to have a sufficient geometry count — where "sufficient" could change depending on where you were in the process — and modifying the mesh to regularize it or reorient polygons. They also had to know a bit about how modifiers work, so that as they adjusted the settings they knew how to optimize them for where they were headed. So some things were very specific to the eye mask, but there was a lot of time being spent just wrangling the software, and that bottom part is where a lot of the training actually had to come in. We looked at that and saw that their process had good properties for turning it into a wizard — an automated, or at least
semi-automated, process. The steps were broken down very clearly, and they were very succinct: do this in this step, do that in that step. The adjustments they had to make were well defined and precise — the thickness of the frame had to be a certain number of millimeters. The human basically adjusted a few parameters and worked with proxies: especially around the sides of the head there may be hair involved, and they didn't want to project to the hair, so they would put in a curved surface as a proxy. Toward the front the mesh projects very closely to the face, but along the sides it smooths out to the proxy. So basically they were working to guide the system along, and really the only tricky part was managing the software state. In other words: let the humans do what they're good at — identifying landmarks, optimizing intuitive parameters, making artistic decisions where necessary — and let the computers do what they're good at, which is computation and managing data and state, with the human guiding the automation. That's the key to making a process into a semi-automated, or even fully automated, wizard. So that's what we did: we took those ideas, those steps, and put them up here.
From the original 28 steps we're down to about 24, but the important thing is that the squares up there are the steps involving human interaction: selecting the STL file, identifying the customer by their identification number, landmarking different places on the head. Where the user is involved, it's very simple and very clear-cut. The hexagons are the fully automated steps along the way: adding modifiers, setting their parameters, applying them, selecting faces, deleting, joining, bridging — all that good stuff. So most of what's up there is either a very simple user interaction or fully automated.

Now, we do have some diamond shapes up there; those are mostly confirmation steps. Along the process we wanted — well, sometimes we had to — make sure the automated system did the right things and made the right decisions, and where needed we might expose a couple of parameters, or a set of settings to select from. If a step happened to not work out well, they can click a button, change the parameters, and rerun the step. The other cool thing is that at any point along the way they can transition from one state to the next or go back. If they need to make some adjustments, they can always move back — all the way to the very beginning — which gives them the flexibility to make adjustments, see the effects, and then return.

The other cool thing is the user interaction itself: all of those little squares are very basic, so the amount of training needed to get a designer up and going is minimal. That 90 minutes it took a skilled designer before now takes less than five minutes for a lightly trained individual.

So here are a couple of screenshots of the tool. Here the head is being landmarked, and all the user — the designer — needs to do is click on a few places: one of the
ears, then an eye, the tip of the nose, the other eye, and the other ear. In fact, as you're clicking, the wizard will even change the view to the side, the front, or the other side, depending on what it's looking for, and the user can move those points around if they didn't hit them very precisely. So you still have the control of being able to move the camera, but the wizard is helping them along the way.

In this shot they're adjusting the proxy that approximates the head. This is especially for around the hair, so the projection can ignore any of the hair geometry. The proxy is supposed to be a very close approximation of the head, so there's going to be a lot of intersection and overlap, and we needed both meshes to be transparent so you can see both inside and out. Sometimes, though, you might need to turn that off, so this step sets up all of the materials to be transparent but still exposes a couple of parameters — little checkboxes — for the artist or designer to change as needed.

And in this screenshot we have a very complicated set of interacting pieces: the frame itself consists of two different layers.
There are a few reasons for this, but there are a lot of vertices involved. In the previous process they were turning on proportional editing, moving those vertices around, and then guessing at how well it was going to project. Here, instead, we've marked a few of the vertices, using vertex groups, as control points with an automatically defined falloff. The user doesn't even have to think about Edit Mode: they just click on a control point and move it, and everything moves along with it. And it moves across multiple objects, not just the one currently being worked on. When it comes time to cut holes for the liner to poke through so that the two pieces can stay together, those holes — as well as the little stubs that go through them — are also affected by the control points, so everything moves together very well.

Here's one of the shots of the confirmations. This is after the liner has gone through its processing stage, where the front of it is projected to the face, the sides (which don't project very well) are deleted, a template edge is added, and the meshes are joined and bridged. Sometimes it can have a few issues — it didn't quite cut enough — so we expose a couple of parameters. But for the most part the user really doesn't need to adjust anything; it's mostly just a confirmation: everything looks good, go on to the next step.

Now, a few challenges that we ran into. Every head is different. They had a really nice process that worked fairly well for an average head, but they started getting more and more different head shapes. The distances between the eyes are different; the profile of the nose and the brow, the location of the cheekbones — all very different — and it ended up breaking a lot of their workflow. So we needed some way to replicate the issues they were having. We were working across the world, with people in
China and the western US, so we needed a way for them to submit the issues they ran into. The way we did this was to basically instrument everything: every action the designer makes, every decision, is recorded somewhere, along with a copy of the mesh at each of the steps. The cool thing is that we can load that into the wizard — it's a little button — and basically speed-run to the point they ran into, because some of the process takes a little while; you have to click through some stuff, and we wanted to see exactly what the designer was running into so we'd know how to fix it.

Also, because different heads broke the process, Elio needed to iterate over lots and lots of designs. I don't know exactly what design number they're on right now — this one I think is close to design five or six — but that meant our tool needed to be flexible enough to iterate as well. They also tried out different printers and different materials, working through material issues and design weaknesses. The wizard actually helped them do that: because we reduced the run time from 90 minutes down to about five — basically just clicking through — and because of the instrumentation, they were able to iterate over new designs very quickly.

The other thing I forgot to mention is how we implemented this — getting a little bit on the nerdy side — we implemented it as a finite state machine, which allowed us to treat the process as different states as we move along. And because we modeled it that particular way, as a finite state machine, it allowed us to reorder steps, create new steps, and delete steps very easily.
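Backing up a moment to the control points: the idea of dragging nearby geometry with a smooth falloff can be sketched in a few lines of plain Python. This is illustration only, not Blender's actual API — the function names here are made up, and in the real wizard the weights came from vertex groups.

```python
import math

def falloff_weight(dist, radius):
    """Smoothstep falloff: 1.0 at the control point, 0.0 at `radius` and beyond."""
    if dist >= radius:
        return 0.0
    t = 1.0 - dist / radius
    return t * t * (3.0 - 2.0 * t)

def move_with_falloff(verts, control, delta, radius):
    """Translate every vertex near `control` by a weighted share of `delta`.

    `verts` is a list of mutable [x, y, z] positions; calling this once per
    object is how one control point can drag geometry across several meshes,
    so the frame, holes, and stubs all move together.
    """
    for v in verts:
        w = falloff_weight(math.dist(v, control), radius)
        for axis in range(3):
            v[axis] += w * delta[axis]
    return verts
```

A vertex at the control point moves by the full delta, one halfway out by half of it, and anything past the radius stays put.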
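The finite-state-machine shape, with the instrumentation and replay folded in, might look something like this stripped-down sketch — plain Python, all names hypothetical, with everything Blender-specific omitted:

```python
class WizardStep:
    """One state of the wizard: a name plus the automated work for that step."""
    def __init__(self, name, run):
        self.name = name
        self.run = run          # callable(params) -> result

class Wizard:
    """A wizard as a finite state machine: an ordered list of steps plus a cursor.

    Modeling it this way makes reordering, inserting, and deleting steps
    trivial, and logging (step, params) pairs gives session replay for free.
    """
    def __init__(self, steps):
        self.steps = list(steps)
        self.current = 0
        self.log = []            # instrumentation: every action is recorded

    def run_current(self, **params):
        step = self.steps[self.current]
        self.log.append((step.name, params))
        return step.run(params)

    def forward(self):
        self.current = min(self.current + 1, len(self.steps) - 1)

    def back(self):              # the user can always step back and retry
        self.current = max(self.current - 1, 0)

    def replay(self, log):
        """Speed-run a recorded session to reproduce a reported issue."""
        results = []
        for name, params in log:
            step = next(s for s in self.steps if s.name == name)
            results.append(step.run(params))
        return results
```

A second `Wizard` fed the first one's log reproduces the same sequence of results, which is the "speed-run to the problem" button from the talk.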
So again, that helped with the iteration.

Another challenge we ran into: the quality settings you want for inspection and evaluation — making sure everything looks good — are not interactive. The subdivision levels were way high, and we were doing lots of projection on high-count geometry, so it was not feasible to grab a control point and move it around interactively. The way we worked around this was, when a control point is clicked, to reduce the levels of the things that are less important for adjustments. It's still a good approximation of what the end result will look like, but it runs at a much more interactive rate.

Another challenge: we have lots and lots of assets, many different versions, a whole bunch of different things that need to be shown to the designer — maybe not all of the time, just some of the time. So we created a library file. It's just a .blend file with a whole bunch of things in it, containing everything the designer needs to do their work. It also has all the meshes they were working with, marked up in a particular way. We use vertex groups a lot: to identify what is going to be a control point, which parts of the meshes get bridged together, what's the upper half and the lower half, which is the front side (so it gets a texture applied) versus the back side.
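That quality-reduction trick — dropping the expensive settings while a control point is being dragged, then restoring them on release — fits naturally into a context manager. A toy sketch, where a plain dict stands in for the real modifier stack (in Blender this would poke at properties like a subdivision modifier's levels instead):

```python
from contextlib import contextmanager

@contextmanager
def interactive_quality(modifiers, fast_levels):
    """Temporarily swap in cheaper settings while the user is dragging,
    then restore the full-quality values when the drag ends.

    `modifiers` maps setting name -> value; `fast_levels` holds the cheap
    values to use during interaction.
    """
    saved = {k: modifiers[k] for k in fast_levels}
    modifiers.update(fast_levels)    # cheap approximation while dragging
    try:
        yield modifiers
    finally:
        modifiers.update(saved)      # full quality restored afterwards
```

Usage would be `with interactive_quality(mods, {"subsurf_levels": 1}): ...` around the drag; the `finally` clause guarantees the quality settings come back even if the interaction is cancelled.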
So again, it allowed us to iterate very quickly. We also hand-wrote a bunch of scripts — a little hard to read here — that prepared the objects for use in the library, because sometimes the objects might have a wrong origin, so we need to be able to apply and adjust the origin, or remove modifiers, or print out all the modifier settings. The other thing: when we got a new design from Elio, it would have all the modifiers on it, not yet applied, so that we could replicate what was going on. But when you import objects with modifiers, Blender tends to pull in everything else those modifiers are touching, and it was a lot easier to just strip away all the modifiers and then re-add them in code. So we kept around the originals that still had the modifiers, but the version that gets imported has very few of them — maybe an applied mirror modifier, but that would be about it.

Another challenge we ran into is undo, because we needed to be able to go forward but also backward. Undo in Blender works pretty well — unless you're doing something weird, like what we do. It was hard to know when an undo step would get pushed, and it was very hard to control whether to force an undo to happen or to prevent one. Also, undo only captures some of the data changes; it doesn't capture everything. This is a really hard problem — again, we're doing something very weird — so the way we worked around it was to just write our own undo system, which basically involves making a copy of every changed item and stashing it away in a hidden scene. Over here you can see all the different backups in there, and along with them we keep all of the instrumentation details.

There were also some challenges that we ran into that are not quite yet met.
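Before getting to those: the custom undo system just described is essentially a snapshot stack. Here's a minimal sketch of the pattern, with deep copies held in a Python list standing in for the copies the wizard kept in a hidden scene — the class name is made up for illustration:

```python
import copy

class SnapshotUndo:
    """Minimal custom undo: stash a deep copy of the working state at each
    checkpoint and restore the most recent copy on undo."""
    def __init__(self, state):
        self.state = state
        self.stack = []

    def checkpoint(self):
        """Call before a step runs, so the step can be rolled back."""
        self.stack.append(copy.deepcopy(self.state))

    def undo(self):
        """Restore and return the last checkpoint (no-op if none exist)."""
        if self.stack:
            self.state = self.stack.pop()
        return self.state
```

The deep copy is the important part: it captures everything, including the data Blender's own undo was missing, at the cost of memory for each backup.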
First, the dependency graph. It reevaluates very often — perhaps more often than it needs to. If we're applying a bunch of modifiers to an object, it reevaluates after every application. There was no way to say: all right, hold up, we're going to do a few more operations, we don't really need to know the current state quite yet, just apply them all through. What we ended up doing was disabling the modifiers while applying, so they would not be reevaluated every time. But it was not a great solution. This is a really hard problem — I'm not entirely sure how to solve it — but it was something we ran into.

The other thing we ran into, which is another weird one, is that the modal operator stack — it's kind of a stack — was not really accessible. It was hard to tell who or what modal operator had control. Our wizard ran as a modal operator, but any time we wanted to grab the model and move it around, that was another modal operator, and it was hard to figure out when that gave control back to us. The visualization also changes with that — it may show the gizmo at times. What we ended up doing was just ignoring it as much as possible — it is what it is — or writing custom versions when we needed to.

Another challenge we ran into: boolean modifiers can be really finicky, especially when you try to automate them. In particular, this bit us on the inside of the frame.
There's a customer identification that gets applied there — just digits that are unioned onto the frame. But sometimes, if those digits didn't get positioned correctly, the union didn't work, geometry just disappears, and there's no way to tell that the operator actually failed in the way we care about. So while some of the confirmation steps had options to tweak, they were mostly there to make sure everything worked fine — that the operators didn't mess up, and that all the digits that are supposed to be there are actually there.

Also, the UI system is a little bit cumbersome — a little too cumbersome for a quick prototyping session. When you're quickly iterating over different designs, adding a new button just takes a while; it takes a lot of code. So what we ended up doing was basically writing our own UI system that converts Markdown, HTML, a little bit of CSS, and Python into UI widgets. That allows us to iterate very quickly, and we can control very well how the overall UI looks.

Okay, here's basically my last slide. I mentioned that this talk was geared toward artists, add-on developers, and Blender developers, so here are your takeaways. Artists: a lot of the work you are doing could potentially be automated or semi-automated, if you are able to define very well what those adjustments and operations need to be, and what the values for them are. Talk with a developer — maybe we can automate it.
It may not be a one-button push that solves everything, but it can get you there pretty quickly, especially if you can limit the amount of information you need to provide and let the computer do its job. Add-on developers: try to design for iteration — it's going to be part of the job. Even if the customer comes to you and says this is the final version, it's never the final version, so build that into your system. Also design for replay, because a lot of the time the customer is going to run into problems that you need to recreate to see how to fix them — instrumentation and replay are key. And Blender developers: thank you for Blender applications — that's coming, and I'm very excited to see what we can do with it. I think Blender is already very capable of doing some really cool stuff like what we've done. It's very different from traditional Blender work, but it can be done. There are still a few challenges — I mentioned some, and there are a few more; happy to talk with you about them — but great work, and thank you.