So we're done; this is level 9 of 10, where the hips and legs and then the shoulders, neck, and head were created. And I said level 9 — well, we can go up to level 10, which shows all of those steps performed in a single image.

So, just to give you a sense of the interactivity of this: here is our custom visualization of the sequence. You can choose whatever camera angle you want, and then you can view the playback of the edits at ten or eleven levels of detail, depending on which index you're using. At the lowest level, you see every single action that's performed — every camera change, every vertex move, and so on. You can play this back quickly, so this is exactly the same thing as a time lapse. But you can also go up a level of detail and see a whole bunch of changes at once in a single shot.

Now, I said you can choose which camera you're viewing from, but you can also see it from the point of view of the artist, if you wanted to see exactly how they were working. And you can step through, choosing an extremely high-level view, or getting a little more detail, and a little more detail. Down across the bottom are a few snapshots at different points in the timeline, and you can scroll through them — it's the same thing as the scroll bar on a video player.

Another really cool feature is being able to filter the timeline down to certain operations. So here we're going to see just the extrude operator — where extrude was used — and you can filter down to any type of operation. But probably more interesting is being able to select or highlight part of the mesh and then filter the timeline down to only the edits that affect that part. So here we'll see the arm is selected.
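As an aside, the two filters just described — by operator type and by selected region — amount to simple predicates over the recorded edit log. Here is a minimal sketch in Python; the edit-record fields (`op`, `verts`) are hypothetical stand-ins, not MeshFlow's actual data format:

```python
# Hypothetical sketch of timeline filtering, not the actual MeshFlow code.
# Each recorded edit notes its operator name and the vertex ids it touched.

def filter_by_operator(edits, op_name):
    """Return indices of edits that used the given operator (e.g. 'extrude')."""
    return [i for i, e in enumerate(edits) if e["op"] == op_name]

def filter_by_selection(edits, selected_verts):
    """Return indices of edits touching any vertex in the user's selection."""
    sel = set(selected_verts)
    return [i for i, e in enumerate(edits) if sel & set(e["verts"])]

edits = [
    {"op": "extrude",     "verts": [1, 2]},
    {"op": "move_vertex", "verts": [7]},
    {"op": "extrude",     "verts": [7, 8]},
]
print(filter_by_operator(edits, "extrude"))   # the two extrude edits
print(filter_by_selection(edits, [7]))        # the edits touching vertex 7
```

The same predicate idea extends to any metadata the recorder captures — operator parameters, timestamps, or camera state.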
So the highlighted areas along the timeline here are where the arm was being operated on — some sort of edit was happening there. So we can get a view of just what happened to the hand or the arm.

As one other example, here is a hydrant that was constructed, and I'll show how the end cap right here was constructed in nine steps. In the top left, the cylinder was added, and then there's a series of loop cuts and insets and so on that eventually arrived at the final cap here. All of these visual annotations, again, were produced by the visualization system completely automatically. All the artist had to do was record themselves at work.

Okay, before I move on to the next part, are there any questions? I'm kind of rushing through some of this, so I can probably take one or two questions.

The operators for extrude and annotation — how does the system work if you have relaxing vertices or bridging, many operations? How does that get annotated?

Sure. In our system here we stuck to very basic operations — extrude and move and so on — but you could potentially extend this to handle more sophisticated types of tools. And again, this was focused on polygonal modeling; this is not at all sculpting, which has an entirely different workflow. But, as I'll mention a little later, we did extend this technique to include sculpting as well. All right, if there are any other questions, I'll be happy to answer.

Do you include modifiers?

No, this is just operations that are performed on the mesh. And I'll remember your question — I'll be certain to answer it afterwards. So I want to move on just to make sure I don't run past my time.

The second paper that I would like to talk about is MeshGit: Diffing and Merging Meshes for Polygonal Modeling. This paper was published at SIGGRAPH 2013.
So if I showed you these two meshes — they're just two snapshots — you'd have no idea what happened between them. Can you tell me what happened between those two edits? The artist, I'm certain, is in the room, so he could probably tell you. There are a lot of gross changes — gross as in really obvious: some new geometry being added, some features across the top. But there are also some very small, fine features that could easily be missed if you were just visually inspecting this. There's an edge loop in the shoulder ball here that existed in the previous version and was removed in this one — that would be very easy to miss with visual inspection.

Here's another example where the mesh on the right does not at first seem like it might be a derivative, but it actually is a derivative of the mesh you see on the left. And here we have two really heavy meshes with lots of geometry, where changes happened in many different places, and finding every single one could be rather challenging.

So we tackled this problem by taking an approach very similar to text diff, where you define a cost metric. In the text world, every addition, deletion, or change is one unit of cost. Here we defined the cost of matching elements of one mesh to the elements of the other: how much does it cost to match this face to this face, and this vertex to this vertex — and it keeps the neighborhood in mind as well. So we defined this cost metric, and then we developed an algorithm for going through and doing the assignment — the matching — between them. Again, I'm going to hand-wave the details for now; if you want them, you can talk to me afterwards. But let me show you some of the results.

So here are two different versions of Sintel's head, from early in the process when they were trying to figure out what shape her face was going to be. I'm going to use this as a demonstration.
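To make the text-diff analogy concrete, here is a toy sketch of what a per-element matching cost could look like: the geometric distance a vertex moved, plus a penalty for each neighbor whose current match disagrees with the neighborhood. This is an illustrative stand-in, not the actual MeshGit metric, which is defined more carefully over vertices, faces, and their full adjacency:

```python
# Toy matching cost: geometric distance plus one unit of penalty for each
# neighbor of vertex a whose current match is not a neighbor of vertex b.

def match_cost(pos_a, pos_b, neighbors_a, neighbors_b, matching):
    dist = sum((pa - pb) ** 2 for pa, pb in zip(pos_a, pos_b)) ** 0.5
    penalty = sum(1 for n in neighbors_a
                  if n in matching and matching[n] not in neighbors_b)
    return dist + penalty

# A vertex that moved 0.1 units, with its one neighbor consistently matched:
matching = {5: 5}
c = match_cost((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), {5}, {5}, matching)
print(round(c, 3))  # 0.1
```

A matching algorithm then tries to assign elements of one mesh to the other so that the total cost is low; unmatched elements become the additions and deletions shown in the visualizations.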
So the first step in our algorithm goes through, finds any vertex that hasn't moved much, and matches them up, and does the same thing with the faces. In this case there are considerable changes — vertices that moved quite a bit — so only a few get matched up initially. But then we take that information and propagate it outward: we grow these patches out based on the algorithm and cost metric, and this continues around the entire face until eventually we have a pretty good matching. But there are some issues with the ears. The ears are actually very similar to one another, but the way our algorithm moves forward, it can get stuck in a local minimum, if that makes sense to you. So we have a backtracking step that removes those elements, and then we continue on until we get this mesh right here.

Just to give you an idea of what you're looking at: the one on the left is what we consider the original — in this case it doesn't really matter. Any face that does not have a match in the other mesh is colored red, so it can be considered a deletion. And in the derivative mesh, on the right, we highlight unmatched faces in green, so those can be considered additions. You can see an edge loop that was added around the face, and another right across the collar bone. The faces are also colored in blue to indicate how far a vertex moved: gray means it hasn't moved much compared to its original position, while deep, highly saturated blue indicates lots of movement — lots of sculpting and so on.

So that's the general process, and it applies across many different meshes. We went to BlendSwap and found a whole bunch of models, and we pulled others from open source projects. So here is the same creature mesh from before with all the changes highlighted.
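The seed-and-grow matching just walked through can be sketched roughly like this — using 1-D "positions" and a hand-rolled adjacency for brevity. Everything here is illustrative; the real algorithm works on full meshes, scores candidates with the cost metric, and adds the backtracking step mentioned above:

```python
# Illustrative seed-and-grow matcher over two toy vertex sets.

def seed_matches(pos_a, pos_b, tol=1e-6):
    """Seed: match any vertex that has barely moved between the versions."""
    return {i: i for i in range(min(len(pos_a), len(pos_b)))
            if abs(pos_a[i] - pos_b[i]) <= tol}

def grow(matching, adj_a, adj_b, pos_a, pos_b, max_dist):
    """Greedily extend matches outward to unmatched neighbors of matched
    vertices, accepting a candidate only if it hasn't drifted too far."""
    changed = True
    while changed:
        changed = False
        for a, b in list(matching.items()):
            for na, nb in zip(sorted(adj_a[a]), sorted(adj_b[b])):
                if na not in matching and nb not in matching.values() \
                        and abs(pos_a[na] - pos_b[nb]) <= max_dist:
                    matching[na] = nb
                    changed = True
    return matching

pos_a, pos_b = [0.0, 1.0, 2.0], [0.0, 1.2, 2.1]   # vertices 1 and 2 moved
adj = {0: {1}, 1: {0, 2}, 2: {1}}                  # a simple chain
m = grow(seed_matches(pos_a, pos_b), adj, adj, pos_a, pos_b, max_dist=0.5)
print(m)  # {0: 0, 1: 1, 2: 2}
```

Only vertex 0 seeds the matching; the growth step then pulls in its moved neighbors one ring at a time, which is the "patches growing around the face" behavior described above.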
You can see many of the features that were easily picked out are very obvious, but you can also see the tongue had some sculpting done to it — it was moved around — and lots of teeth were added, and so on. Here is the woman from before as well. You can see the dress actually matches up very well; it was sculpted into a different style of dress, the front was added in, and obviously the hair was removed, and so on. And here is the shuttle example with all the additions and changes highlighted.

One other example comes from the Durian open source project. Pablo provided different snapshots of the mesh while he was working, and I took these meshes and just ran MeshGit between each pair — from the first to the second, the second to the third, and so on. And I can highlight different things in the sequence to indicate when geometry was deleted, when it was added, or when it was added and then deleted again. Across the tummy here there was a little inset that was added in, and some shaping done on the hips.

Okay, but not only can we do two-way diffs between two meshes, we can also do three-way diffs, and we can turn these changes into actual operations and potentially merge them. What we're seeing here is a Shaolin monk: in the bottom center is the original mesh. The one on the left was sculpted to make him a little bit heavier — this is one edit, derivative A. And then the second derivative, B, took the original mesh, removed the skirt and boots, and added sandals and feet. Now these two sets of edits, according to some details in the paper, are non-conflicting: the regions they affect don't overlap.
So we can actually take the edits from the original to derivative A and the edits from the original to derivative B and combine them automatically, creating a new model with both sets of edits. The cool thing is that these are edits on the mesh itself, so the result maintains the topology of the original edits. We can apply subdivision surfaces to the merged mesh and the results are what we'd expect.

Okay, so I ran through a lot of stuff, and I imagine there are lots of questions. Just to summarize: in MeshFlow, we instrumented Blender to record artists at work, applied a clustering algorithm to summarize, and then visually annotated the mesh sequence to show the edits being done — fully automatically. And in MeshGit, we took snapshots, defined a metric of change, and built an algorithm to determine what edits happened. We didn't actually have the edits that happened between the snapshots; our algorithm reconstructs them. With this, we're able to visualize two-way and three-way diffs and merge edits.

I mentioned earlier that MeshFlow only works on modeling. We extended it a little later to handle sculpting, which is kind of interesting. But all of this went towards answering one question: what if we had multiple artists independently working on the same thing? You can think of the scenario of a student wanting to know how to model a spaceship. You have a set of students — how do you compare their different workflows? We have some preliminary results, and it's really pretty cool, but it's still ongoing; we'd like to get a lot more data. So if there are any modelers here who would be interested in being part of some research, please come talk to me. Like I said, it's really pretty cool, and I think there are some really great educational tools we can pull out of this research.
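The three-way merge described above can be caricatured in a few lines. Here each version is a flat dict mapping an element id to a value — a hypothetical stand-in for real mesh data — and two sets of edits merge cleanly exactly when they touch disjoint elements:

```python
# Toy three-way merge, mirroring the non-conflicting case from the talk.
# This is a sketch of the idea, not MeshGit's actual merge algorithm.

def three_way_merge(original, deriv_a, deriv_b):
    changes_a = {k: v for k, v in deriv_a.items() if original.get(k) != v}
    changes_b = {k: v for k, v in deriv_b.items() if original.get(k) != v}
    conflicts = set(changes_a) & set(changes_b)
    if conflicts:
        raise ValueError(f"conflicting edits on elements: {sorted(conflicts)}")
    merged = dict(original)
    merged.update(changes_a)   # apply derivative A's edits
    merged.update(changes_b)   # apply derivative B's edits
    return merged

original = {"belly": 1.0, "boots": "on", "arms": "thin"}
a = {"belly": 1.4, "boots": "on", "arms": "thin"}    # sculpted heavier
b = {"belly": 1.0, "boots": "off", "arms": "thin"}   # boots removed
print(three_way_merge(original, a, b))
```

When both derivatives touch the same element, the conservative behavior is to raise a conflict and let the user resolve it, just as the talk describes for the spaceship example.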
So I'd like to thank my research collaborators and a lot of artists — many of you are here; thank you very much for your data. Here is my contact information. And that is all. Thank you.

First, a question about MeshFlow: is it tied to Blender, or can it be used in any 3D application? And how does the artist take the snapshots — do they simply work and then press a button to capture it?

So it is tied to Blender, in that we instrumented Blender itself. We went into the C code, into the code path where it basically hijacks the undo: every time the undo functionality fires, it records what action was performed and then writes out a saved .blend.

So you basically save the undo steps?

We save them as a sequence of files, yes. And that's essentially what the later project was doing as well — just recording what happens in there. Your other question was whether we could apply this to other modeling programs. Sure: if you can get a clean action and a snapshot of the mesh out of the system, then it doesn't really matter whether it's Blender or Maya or 3ds Max — as long as you can get, like I said, the snapshot. If you know how to do that in Maya, come talk to me; I had some trouble with that.

One more. About MeshGit: do you simply save two different .blend files and then compare them in MeshGit?

You can think of it this way: in Git or SVN or some other repository, you have a model, you check it out, and some other artist has done modifications on that same file and tried to commit it. SVN says it's a binary file — it doesn't know how to merge the sets of edits. So yes, these are just snapshots of the model; it doesn't necessarily pertain to Blender. We were actually exporting them as PLY files, but you can use any data file — we were just looking at the geometry. So any file that stores geometric information.

Thank you for your talk.
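The instrumentation described here lives in Blender's C undo path, but the idea fits in a few lines of Python: hook the undo push, log the operator name, and write out a numbered snapshot. Everything in this sketch — the `Recorder` class, JSON snapshots standing in for .blend files — is illustrative, not the actual patch:

```python
# Conceptual sketch of the recording idea (the real patch modifies
# Blender's C undo code): every "undo push" logs the operator name and
# writes a numbered snapshot of the document state.

import json
import os
import tempfile

class Recorder:
    def __init__(self, out_dir):
        self.out_dir = out_dir
        self.log = []

    def undo_push(self, action_name, state):
        """Called wherever the host application records an undo step."""
        idx = len(self.log)
        snap = os.path.join(self.out_dir, f"snapshot_{idx:04d}.json")
        with open(snap, "w") as f:
            json.dump(state, f)   # stand-in for saving a .blend file
        self.log.append({"action": action_name, "file": snap})

rec = Recorder(tempfile.mkdtemp())
rec.undo_push("extrude", {"verts": 8})
rec.undo_push("move_vertex", {"verts": 8})
print([e["action"] for e in rec.log])
```

The resulting action log plus snapshot sequence is exactly the input the visualization and clustering stages need, regardless of which modeling package produced it.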
I'm really amazed by the MeshFlow system — it reminds me of my beloved LEGO; with the construction manual it's kind of the same. I was wondering: is there a way to add explanation from the artist? You don't have the audio, because it's not really a movie or a video, and I didn't see anything for text annotations or the like. How does that work?

In our research we didn't want to go down the path of actual tutorials, because that's really challenging. We wanted to tackle as simple a problem as possible, which meant just recording the artist and not worrying about annotations. On top of that, if the viewer goes up a level in the summary, do you need to provide a new type of annotation for that? Conceivably you could record audio while the artist is working — they could be speaking as they work, just as in a regular video tutorial — and then have this as a supplementary addition, or play back the audio alongside the playback. So you can certainly extend this to handle all sorts of annotations; we just didn't do that. Yes?

Do you record the time when each edit is finished, so one can sync it with the audio?

Yeah. We're saving out the files, so you can look at when each file was written. It could also be part of the action log when you record the undo steps. You can record the times and then play back in actual artist time as well.

Wonderful. How feasible would it be — like the monk mesh, where you combined two different edits into a new mesh — to take a series of commits, say you're building a hand and each commit adds a different feature or change, and then actually generate a whole bunch of new models based on variations of the model?
Sure. One example I didn't show was a spaceship with conflicting edits, which is very similar to that: we took one model and added a bunch of features, then took the original model and added another bunch of features, and they happened to overlap. Our merging algorithm detects these conflicts and allows the user to decide how to resolve them, just like SVN or Git would. So presumably — conceptually — you could do the same thing with a sequence as well, turning edits on and off. In fact, yes, you can certainly turn edits on and off.

At first I thought Jonathan was going to ask my question, and I think you kind of answered it already. Regarding the diff functionality: this is not currently usable as a tool, right? If I actually committed my .blend files into Git or SVN, I couldn't use this as a diff tool for comparing the files?

Well, yeah — I haven't written the code to load a .blend file, but if you had that piece, then yes, you could do that. That's the only missing component.

And also the merging?

Well, we wrote a visualization system to do that, but you could conceivably do all of this in Blender as well. You could have some sort of connection with SVN or Git, configured so that if there's a conflict of edits on a .blend file, it calls this tool — and that could be in Blender or something else. So you can configure that. Yes?

Mine is the same as Jonathan's: when I saw the spaceship, my idea was, if there are multiple non-conflicting changes, could I generate a whole bunch of variations depending on the non-conflicting changes? So I could model five different variations and then I have 24...

Yes.
So all those features — the algorithm goes and groups them together to try to granularize the edits. It's not just "there are changes here and changes there, so I can't merge them"; it actually groups them into patches, and then you can turn these patches on and off. And it doesn't matter when they happened: you could have a sequence and want to spatially undo a feature you added in. So regardless of the time, as long as there aren't any edits that happened afterwards in that area, you can undo spatial changes.

I really like the first thing you showed. Do you already provide it as an add-on, so I can use it myself? And what is the impact on relatively small meshes — say, I don't know, maybe 20,000 faces?

You don't notice it; it's like hitting Ctrl+S, essentially. So if you notice a pause when you save your file, you'll notice a change here. We took a very naive, simple approach to how we instrumented it, so there may be — there are Blender devs here — ways to optimize this a little, make it multithreaded or something. But presently I do have the instrumentation patch available. It's for version 2.5 — 2.54 — an older version of Blender; I haven't updated the patch in a while, but you can download it and merge it in.

So it's actually modifying the C code of Blender? Could you do it with just the Python?

Potentially, yes.

Just another question about merging and conflict handling. I'm a developer; I usually have a tool like KDiff3, which on a conflict offers me: do you want the line from file A, or the line from file B, or both from A and B? Would you be able to offer such a solution?

Sure — but does that make sense on a mesh, though?

Yeah, that's what I'm wondering; that's my question. From an artistic point of view, does that actually make sense?

So we took the very conservative approach and said: you can choose neither, you can choose A, you can choose B, or we'll give you all the data and you can merge it by hand. So that was our
approach, because we don't know what it means to merge geometry where it's actually intersecting — that seems to conflict with the artist's original intention.

I can imagine, in the example you gave — where you make a guy heavier in one edit and remove his shoes in the other — it does make sense. I don't know if it would be possible to do something like that.

Sure, sure. We also considered conflicting edits not just in a spatial sense: removing the boots — if he actually had feet underneath, which in this model he didn't — is just an operation on the boots, not on the character itself. But we do consider it a conflict if the edits overlap on the same part of the mesh. There are lots of different variations you could explore here; we took the extremely conservative approach, just to say, here's what we can do.

Is there good documentation on how to set up MeshFlow, record something, and then turn it into a video with the arrows and so on, for someone like me who hasn't used it at all?

There's very limited documentation — it was just for the reviewers, to say "here's how you do it." Beyond that, no; I'm guilty of the documentation not getting written. But if somebody is really interested, I can sit down and write up a step-by-step for that sort of thing.

If you have any more questions or want details, come and see me afterwards. I think the next talk has already started, so I'd like to end here.