Hello, everyone. I'm Michael Wellman, Chair of Computer Science and Engineering here at the University of Michigan, and I'm pleased to introduce our speaker today. Paul Debevec is a 1992 alumnus of our Computer Engineering and Mathematics undergraduate programs and is our 2023 College of Engineering Alumni Merit Award winner in Computer Science and Engineering. Paul's groundbreaking work in image-based modeling, rendering, and high dynamic range imaging has greatly influenced the field of computer graphics and changed what is possible in the production of photoreal visual effects for the film and television industry.

While an undergraduate at Michigan, Paul devised a new 3D modeling process that allowed him to build a textured 3D model of his car from photographs. This led to his PhD work at Berkeley, where he extended the technique to create sweeping virtual cinematography of the Berkeley campus. He then developed a new technique for lighting virtual objects, which is now a staple of today's visual effects production. Paul extended this work with techniques to light real actors in virtual environments, to provide actors with clear images of the virtual environments they are acting in, and to use LED lighting to illuminate actors on virtual production stages. Paul's work has been used in productions including The Matrix, Avatar, Blade Runner 2049, The Hobbit, Free Guy, The Mandalorian, and many others. I think it's safe to say that we've all enjoyed productions made possible by Paul's work. In recognition of his numerous achievements and extraordinary impact on moviemaking, Paul has been awarded two Academy Awards for Scientific and Technical Achievement, the Progress Medal from the Society of Motion Picture and Television Engineers, and, in 2022, the Charles F. Jenkins Lifetime Achievement Emmy Award. Now, please join me in welcoming Paul Debevec back to Michigan for his lecture today.

Thank you so much, Michael. Let me put this microphone on. It is great to be back at Michigan. I think the last time I was here takes us back to 1992, when I graduated. This photo is from right after I graduated from Michigan: I was briefly stopping back home in Champaign-Urbana before heading out to grad school at the University of California at Berkeley, saying goodbye to my grandma and my dog, and I'm actually driving that car that got mentioned.

Coming to Michigan was an amazing experience, and it exposed me to unexpected things that were very influential in my career. I was already interested in computer graphics as a young person, watching visual effects movies and seeing things like Tron and The Last Starfighter that had early versions of CGI. When I found out you could actually go to college and do computer science, that was awesome, and I hoped to do computer graphics. Michigan did have a basic computer graphics class, and I learned a couple of things there, but the most influential class I had at the University of Michigan was taught by Professor Ramesh Jain: a computer vision class. I had no idea what computer vision was; I read about it in the printed syllabus and it sounded interesting. You can see some of the many accomplishments he made at Michigan and later at UC Irvine on this slide, but what mattered most to me is that he taught my 1989 computer vision class. It was actually a graduate-level class, and I had to ask special permission to take it as a sophomore, and they let me in.
In that class we did things like image segmentation algorithms, and taking the median over an image sequence to make all the moving objects, like cars going down a freeway, disappear. My absolute favorite assignment was something that continues to play into my career: taking multiple two-dimensional images and deriving the three-dimensional structure of the scene from them. This is my assignment on binocular stereo being turned in. We were given a bunch of stereo pairs taken a small baseline apart; I don't have both images of each here, but there was also a stereo pair of a sandwich, and of the Pentagon, of all things. Your goal was to figure out which pixels in this image are the same pixels in that image, and how much disparity there is: things that are very far away stay in the same place, and things that get closer and closer have more and more disparity. That lets you derive a depth map for the scene. Here are some of the somewhat blotchy depth maps I got from that stereo pair of the Pentagon. Even though you can see some errors, places where the depth is clearly wrong because there's repeated structure and the algorithm probably confused one wing with the next wing out, I did well enough that they gave me 10 points of extra credit for the accuracy of the disparity map. So I got 110 points on that assignment, which was a good thing.

Right around that same time: growing up, my favorite movie had been Back to the Future, and it turns out 1989 and 1990 were when the next two Back to the Future movies, the sequels, came out. The very end of the first Back to the Future and the beginning of the second (reshot partly to hide the fact that they recast one of the actors in between) feature this awesome scene of the DeLorean time machine. It's now been tricked out with a fusion reactor, it's got flying wheels, and it goes down a regular suburban street and then flies back at the camera. That visual effects work was done with model miniatures and optical printing at Industrial Light & Magic. I thought it was the coolest thing, and I tried to imagine that maybe my Chevette was a little bit like one of those DeLorean time machines. Same basic shape, maybe just a different scale factor; if you were to shrink it, you could kind of get a DeLorean look. The main difference, of course, was that my car didn't fly.

So that's where I tried my first project of making a 3D model of something from 2D photographs, so I could animate it flying across the screen. I didn't take a bunch of stereo pairs like we'd learned in computer vision class; I came up with a different algorithm that involved taking pictures of the car parked next to a parking structure. This was the summer of 1991, between my junior and senior years at Michigan: getting pictures of it from the front, the top, and the side, and then doing a volumetric intersection of all of these images. I basically made a voxel grid, and any voxel that, when it looked up, hit the car in the top image, and when it looked forward, hit the car in the front image, and likewise hit the car in the side image, was declared part of the car and became part of the volume.
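Going back to that binocular stereo assignment for a moment, here is a minimal sketch of the idea in Python with NumPy. It uses simple block matching; the window size, disparity range, and the focal length and baseline constants are illustrative assumptions, not values from the original assignment.

```python
import numpy as np

def disparity_map(left, right, max_disp=32, win=5):
    """Block-matching stereo: for each pixel in the left image, find the
    horizontal shift (disparity) of the best-matching window in the right
    image.  Distant points have near-zero disparity; close points shift a lot."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

def depth_from_disparity(disp, focal_px=700.0, baseline_m=0.3):
    """Convert disparity (pixels) to depth: depth = focal_length * baseline / disparity."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)
```

Repeated structure like the Pentagon's wings fools exactly this kind of matcher, which is where those blotchy errors come from.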
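And the volumetric intersection for the Chevette might look something like this minimal sketch, assuming three roughly orthographic views and binary silhouette masks that have already been extracted (the axis conventions and resolution here are illustrative assumptions).

```python
import numpy as np

def sample(mask, v, u):
    """Look up a normalized (v, u) coordinate in a binary silhouette mask."""
    return mask[int(v * (mask.shape[0] - 1)), int(u * (mask.shape[1] - 1))]

def carve_voxels(front_mask, top_mask, side_mask, res=64):
    """Volumetric intersection from three orthographic silhouettes: a voxel
    is kept only if it projects inside the car's silhouette in the front,
    top, and side views.  (Texturing then copies pixel colors from whichever
    photo best faces each surviving voxel face.)"""
    occ = np.zeros((res, res, res), dtype=bool)
    for ix in range(res):              # x: left-right
        for iy in range(res):          # y: front-back
            for iz in range(res):      # z: down-up
                x, y, z = ix / (res - 1), iy / (res - 1), iz / (res - 1)
                occ[ix, iy, iz] = (sample(front_mask, z, x) and
                                   sample(top_mask,   y, x) and
                                   sample(side_mask,  z, y))
    return occ
```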
Then I texture-mapped everything by pulling pixel values in from the images onto the voxel faces that face them. After that I had a 3D model. It was textured, and I could make that little animation you saw, pretty low resolution, of the car flying across the screen. This is not technically what I was supposed to be doing for my summer job, but nonetheless everyone else at the Regional Health Resource Center came around my computer, saw the car flying across the screen, and thought it was really neat. I think that's because it looked a lot more photoreal: it really looked like the real car, with the scratched paint, the dented license plate, even view-dependent reflections. Nowadays you might take a few more images and create a neural radiance field, which gives you essentially a three-dimensional opacity grid with view-dependent reflections everywhere as a way to model the scene.

Fast-forwarding to a more recent project I got to work on at Google, we're still basically taking pictures of things from every direction we can. Nowadays we also mix in special lighting in order to make relightable datasets. This is a project from just a couple of years ago that I worked on in Google's augmented reality group. We built this device called a light stage: it shines special patterns of light on a person that help reveal their surface normals. We've got lots of infrared cameras and color machine vision cameras, and we use computer vision techniques to build a per-frame 3D model with some temporal consistency, and then some visual effects techniques to take these three-dimensionally modeled people, insert them into a scene they weren't in at the time, and light them with the light of that scene, to create a digital human avatar that looks a lot like the real person. We can do a little better than this these days by putting some neural rendering techniques on top. So that's definitely going to be a big theme: walking around things, taking pictures, and trying to derive 3D models.

Another project that completely revolved around this was my PhD thesis at UC Berkeley. Another very important thing Ramesh Jain did for me was to advise me on where to go for graduate school, and in particular which faculty members I might want to work with. He not only recommended that I try to join a young computer vision professor at Berkeley, Jitendra Malik, he also wrote me a really strong letter of recommendation. Jitendra welcomed me into his research group practically the day I arrived at Berkeley, whereas the Stanford professors I had met with said, well, if you want to be part of my research group, you'll have to take my class and do very well, and all of that. So anyway, sorry Stanford, I ended up at Berkeley.

When I was home for a holiday break, I took pictures of my high school building back in Illinois, which is this gothic-looking structure, kind of interesting. The idea was: let's fly around the high school, let's 3D-model it from photographs. That's a different shape than the car, but a lot of architecture is made out of things that look like geometric primitives: frusta, parallelepipeds, wedges, and things like that. So with a postdoc, C.J. Taylor, I developed a system, Facade, around that idea. We had a library of 3D shapes, and you could build your scene out of those shapes without having to say exactly where they were, how big they were, or what their dimensions were. Instead you tell the computer: this edge in the model is really that edge in this photograph, and this edge in the model is that edge in that photograph. Then it does a big solve: it figures out where the cameras were that took the pictures, it figures out their intrinsic and extrinsic parameters, and it creates, again, just like we did for the Chevette, a textured 3D model of the scene which you can then fly around.
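As a rough illustration of what that "big solve" is doing, here is a minimal sketch in Python with SciPy. The real Facade system works from marked edge (line) correspondences and blocks of architectural primitives; this sketch simplifies to a single box primitive and marked corner points, and every name and starting value in it is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def box_corners(dims):
    """The eight corners of an axis-aligned box with the given dimensions."""
    w, h, d = dims
    return np.array([[x, y, z] for x in (0, w) for y in (0, h) for z in (0, d)])

def project(points, rvec, tvec, focal):
    """Pinhole projection of 3D points into a camera with pose (rvec, tvec)."""
    cam = points @ rodrigues(rvec).T + tvec
    return focal * cam[:, :2] / cam[:, 2:3]

def residuals(params, marked_2d, corner_ids):
    """Difference between where the model's corners project and where the
    user marked them in the photograph."""
    dims, rvec, tvec, focal = params[:3], params[3:6], params[6:9], params[9]
    proj = project(box_corners(dims)[corner_ids], rvec, tvec, focal)
    return (proj - marked_2d).ravel()

# marked_2d: Nx2 pixel positions of the corners listed in corner_ids.
# x0 stacks initial guesses for the dimensions, rotation, translation, and focal length.
# fit = least_squares(residuals, x0, args=(marked_2d, corner_ids))
```

The key property is the same as in Facade: the primitive's dimensions and the camera's intrinsic and extrinsic parameters are recovered jointly from the photograph correspondences.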
Coming back to Berkeley, where we also had a very nice clock tower, I had an idea. Since this paper had been accepted to the Association for Computing Machinery's SIGGRAPH conference, I wanted to follow it up at another venue they have at the conference, the computer animation festival. The first SIGGRAPH conference I went to was, I think, 1994, and they had this film show called the Electronic Theater: clips of short films made with computer animation, art projects made with computer animation, research demos, and sections of visual effects movies. I remember Industrial Light & Magic had a piece in 1994 about how they made the visual effects for Forrest Gump, where they got Tom Hanks to shake President Kennedy's hand and talk to him using some VFX, and I thought it was amazing to learn how they did that. So I made it my next goal: okay, I've got to make a film, I've got to get into that show. I want to be in the Electronic Theater.

I thought maybe we could amp up what we were doing with scene reconstruction, recruit a team of Berkeley students, and try to direct a movie of flying around the Berkeley tower. I was lucky to get a little 3D model of the tower made: by taking pictures from my kite, pictures from the very top of the tower, and pictures of the tower from the ground, we used the Facade system to make a 3D model, got some pretty good-looking renderings for the time, and could even transition back to the real campus and real footage and make a movie out of the whole thing. And it showed at the SIGGRAPH 97 Electronic Theater. That was one of the biggest moments of my career: getting the FedEx envelope that I knew probably contained the acceptance, or non-acceptance, letter for the film show, opening it up in my advisor's office, and going, oh my God, we're in, this is going to be fun.

As it turns out, one of the people who saw the Campanile movie in 1997 was a visual effects supervisor working on a movie where they knew they had to do some very specially choreographed camera moves through certain scenes. The actors were going to be shot with a camera array in a studio on green screen, and then they needed a very specific move through an environment that would match the views of all of the cameras that shot the actor. They ended up hiring one of the students who had worked with me on the Campanile movie, and they got the software we developed for my PhD thesis. The visual effects supervisor, John Gaeta, even called me up and said, hey, we're about to go to Sydney, we're about to shoot photographs on top of this building and we need to reconstruct the scene; what should I do?
So I told him things like: you should probably use a wide-angle lens, maybe a 24-millimeter; tape the focus down so you're not solving for focus on every shot; and stop down to f/8 so you've got enough depth of field. And they were able to reconstruct the background of the scene for about five shots in the film, and it would drop in and look pretty much as real as the live-action shots a couple before and a couple after. That became the technology behind the virtual backgrounds, the virtual cinematography, for the bullet-time shots in The Matrix. Which was pretty cool, because this was a dark-horse visual effects movie nobody had heard of, coming out in the spring of 1999, and everyone was super excited for the massive new thing in visual effects that year, which was Star Wars: Episode I. We had waited since 1983 for another Star Wars movie (now it doesn't take that long). Everyone thought that was the odds-on favorite: the Oscar for best visual effects is going to Star Wars: Episode I, how could it not, it has everything Industrial Light & Magic could possibly throw at it. And the Oscar goes to... The Matrix. That was a fun one.

Continuing the theme and flashing forward again to what I was doing at Google: we also got interested in cinematography where, with very high quality, even for live action, you could move the viewpoint around in post-production. This is particularly important for virtual reality. If you have a VR headset (VR might be coming back, right? The Vision Pro, let's hope), and you want to watch movies in VR, you definitely want to shoot 360 degrees, and you want to shoot in stereo because you've got two screens to look at. But people are always moving their heads around. Even if you just turn to the right, think about where your eyes start: as you turn, your eyes actually translate; they're not in the same place anymore. If you don't update the viewpoint that you're rendering those images from, it looks unnatural, unlike the way the real world behaves when you're actually looking around at real things, and that is a very off-putting effect. I'm one of the people, and there are many of us, who get a little queasy watching a movie in VR if the viewpoint is fixed to where the camera shot it from, and I shift my weight in my seat and try to look around and the parallax doesn't shift my perspective within the scene. If it does shift your perspective, then it's great: you get all sorts of parallax effects, it feels more immersive and more natural, and you don't get sick. It's a good thing.

So some of what we would do is build a rig like this, about three feet wide, with a whole bunch of GoPro-knockoff cameras that we got more or less synchronized, and shoot from a whole sphere of viewpoints with wide-angle lenses. The idea is that all of the rays of light you might need to generate a virtual viewpoint inside the volume you might move your head around in are basically being recorded, or close to it.
As a result, we had to come up with a representation, and we even used a little deep learning for this, to create something we called DeepView video. We solve for a three-dimensional model of the scene that is a layered representation: a kind of volumetric version of RGB values where each sample also has a transparency. This is similar to some work our colleagues did soon after, which represents all of that structure inside the weights of a neural network and is called a neural radiance field; we were just trying to find a practical way to stream light field video where you can move the viewpoint around. You can see here we can take the scene where this guy is doing some sanding, with volumetric effects from the sparks, and move the viewpoint around in post-production pretty convincingly. We then take all of those layered pieces of depth, pack them into an animated texture map, and stream that as a 4K video into the headset to get a three-dimensional view that renders in real time, even on relatively limited graphics hardware. We can't fly just anywhere in the scene (we couldn't go and read what brand of tape that is), but if you're trying to watch somebody do something interesting that's been filmed with a reasonably practical camera, you can make that happen. And it produces pretty good depth maps of the scene too, which are used for generating all of those novel points of view. I don't think Google ended up shipping this as a product, but Meta has picked up on some of the datasets we published, and there are other good papers coming out on real-time light field video and neural radiance field video. I'm hoping that with the new wave of VR headsets like the Vision Pro, we'll be able to have scenes that play back in the headset and can be seen three-dimensionally, very immersively, in the future.
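The rendering side of a layered representation like that comes down to alpha compositing the layers for whatever viewpoint you want. Here is a minimal sketch of just the compositing step, assuming the layers have already been warped to the novel view; the real DeepView pipeline also has to solve for the layers and reproject them per frame.

```python
import numpy as np

def composite_layers(layers_rgba):
    """Back-to-front 'over' compositing of semi-transparent RGBA layers.
    layers_rgba: list of HxWx4 float arrays ordered far to near, with
    straight (non-premultiplied) alpha in the last channel."""
    h, w, _ = layers_rgba[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers_rgba:                # far to near
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)      # standard 'over' operator
    return out
```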
So, going back to inspiration from visual effects: another movie that left a big impression on me was Jurassic Park, which came out, my god, 30 years ago now. What impressed me so much was how well Industrial Light & Magic was able to put those dinosaurs into the scene with the actors. Presumably no dinosaurs were actually used in making the film, so they had to come up with CG dinosaurs; that's its own story, which is kind of amazing. But I just thought it was so cool that you put the dinosaur in and it looks like it's seen from the right viewpoint, the right perspective, and most importantly, it looks like it's shown in the right lighting. You can see the sun comes from the same direction, the shadows have the same character, there's bounce light from the grass onto the dinosaur's underside that makes things look a little bit greenish. It's pretty darn believable that these two are in the same scene lit by the same light. And that seemed like a fundamental part of movie magic: making it look like the real and the fantastic are occupying the same frame, and thus the same story. What frustrated me is that I didn't know how to do it myself. I couldn't figure it out; I didn't know enough about lighting, and it turns out I didn't know enough about the artistry of how visual effects are done. So I would talk to people who worked at Industrial Light & Magic, because they were in the same area as Berkeley.

And they would say things like: well, we write down where the lights were in the scene, and then it's a big trial-and-error process of moving lights around the CGI objects until it kind of looks right, and that's what we put in the movie, and then the compositing artists fix it up a little bit afterwards. That was not a satisfying answer. It seemed like there should be an actual answer to what this dinosaur would look like in this scene. So as a post-doc at Berkeley I got interested in developing something that ended up being called image-based lighting, where the idea is that we're going to record the light in the scene, and then use literally that record of the light to light the object.

So how do we record light? Well, cameras record light, right? The pixel values in your image tell you how much red, green, and blue light is hitting every single pixel. The problem is that cameras tend to have a limited dynamic range. Going from zero to 255, you probably cover the stuff you need to see in a photo, but you're not going to capture the intensity of the light sources: those get clipped to 255, and they might be hundreds of times brighter. So one of the things I worked on with my advisor, Jitendra Malik, was a 1997 paper on recovering high dynamic range images from photographs, one of our most cited papers, where you bracket the exposures from overexposed to underexposed. Hopefully, in the most underexposed photo, none of the bright pixel values clip anymore; everything is within the range of zero to 255. You then merge the exposures into a floating-point image where the pixel values can go from zero to millions if they need to, and that records accurately the full brightness of everything you need to see in the scene.
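Here's a minimal sketch of that merging step, assuming the camera response has already been linearized and the pixel values are scaled to [0, 1]; the hat-shaped weighting is the standard trick of trusting mid-range pixels and down-weighting ones that are nearly black or nearly clipped.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed exposures into a floating-point radiance map.
    Each pixel becomes a weighted average of (pixel_value / exposure_time),
    so bright light sources keep their true relative intensity instead of
    clipping at the top of an 8-bit range."""
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for im, t in zip(images, exposure_times):
        im = np.asarray(im, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * im - 1.0)      # hat weight, peaks at mid-gray
        num += w * (im / t)
        den += w
    return num / np.maximum(den, 1e-8)         # relative radiance per pixel
```

The full method in the paper also recovers the camera's nonlinear response curve from the bracketed images before this step; the sketch above simply assumes it away.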
And if you shoot a picture of a mirrored ball, or use fisheye lenses, you can capture the full sphere of light around where your object is going to go. Then you have every direction light can come from: the key light, the fill light, the bounce light, the rim light; light can come from anywhere and affect the image. These are a couple of the classic light probe images, the HDRI maps, that I've put up on the light probe image gallery. The process of image-based lighting then uses another invention of computer graphics, ray tracing and global illumination, to simulate that light on a new object. Except we don't start by placing lights in the scene: we put an image of the light into the scene. That's image-based lighting. We texture-map the high dynamic range image onto some surface surrounding the object and light the object with that illumination.

And since I had been so excited to have the Campanile movie at the computer animation festival in 97, I had to follow it up, and I managed to get films into the festival in both 98 and 99. The first was called Rendering with Natural Light, where I photographed a panoramic high dynamic range image of the light in the UC Berkeley eucalyptus grove and lit a little scene of CGI objects with it. Then, to take it further and combine it with the photogrammetry and image-based rendering technology of the Campanile movie, I went over to St. Peter's Basilica in Rome and got one hour of permission to shoot high dynamic range images and panoramas inside, for what became the film Fiat Lux.

I did a 3D reconstruction of the inside of St. Peter's, texture-mapped it with high dynamic range images, grabbed another team of Berkeley students to do a dynamic simulation of monoliths and spheres bouncing around in there, and then we calculated all of the ray tracing and global illumination to make it look like all that stuff was actually in the Basilica. They actually opened the computer animation festival in 1999 with our animation, which was super exciting. And even more exciting, this has become basically the way visual effects are done today.

To pick one fairly random example from Digital Domain, a company in Venice, California that we've worked with a lot: here's a scene from a movie called Real Steel, where Hugh Jackman is teaching this robot to box. I think in reality there was a stand-in actor with a motion capture suit standing on a box doing this, and then you have to replace him with this much more impressive big robot. The shot works because the robot is clearly in the same place as Hugh Jackman, and he's lit by the same light. And they got him lit by the same light because they photographed one of these high dynamic range omnidirectional images and then, in their renderer, lit the robot with that illumination.

Whenever you're adding a CGI object into a scene, there's always the opportunity for it to look a little more real if the lighting is right. Unfortunately, some augmented reality apps just add a CGI object and don't really try to light it right; they use some random lighting. You can have the fun of feeling like the object is over there if it's tracked into your camera view, but it could look so much better if it were lit right. When I was at Google working on augmented reality, this problem came to us: is there a way, even without going out with a mirrored ball or shooting high dynamic range images, to figure out what the lighting is just from the background image, so we can light our inserted CGI object to match a little better? It seems like it might be possible: this certainly looks wrong, so there's got to be a reason it looks wrong, and the reason is that there's something in that background image that tells you the light is different from that. So there is information about the lighting in the background, even without a mirror ball. The problem is that it's pretty limited information. What you see on a cell phone screen is only a small piece of that 360-degree environment, and, as I mentioned, it's clipped; it doesn't have the dynamic range. If there happened to be direct light sources visible, they would be clipped and you wouldn't have their true pixel values anyway.

So we thought: here is a pretty clear opportunity for some machine learning. We built a few of these capture devices and sent people around shooting imagery with a cell phone. In the bottom of the frame we had reference spheres that would tell us what the lighting was like, so we got a bunch of paired training data: here's the background you can see above, and here's what actual objects would look like lit in that scene. Since we have a very shiny sphere, a very diffuse sphere, and one that's roughly halfway in between, we found a way, even without the full dynamic range, to figure out what the lighting was based on how it lit those three objects. Here's some of our training data being gathered; since you just start the video on your phone, you get thousands of images pretty easily. You train a model where you say: this is the background, this is what the lighting should have been, and eventually it gets good enough that you give it a background where you don't know the lighting, it gives you the lighting, and then you can add a CGI object into the scene lit by it. Not as well as Digital Domain would light it by going out and shooting a light probe, but certainly a lot better than what we'd seen in some of the fun augmented reality apps. So if you now use ARCore, and I think there's similar stuff in other AR packages, there's something that will give you an estimate of the lighting in the scene, so you can light your CGI object with something that makes it blend in a little bit better.
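Here is a very rough sketch of that training setup, with hedges: the network architecture, sizes, and names are all illustrative assumptions, and the only idea carried over from the talk is that a clipped, limited-field-of-view background image goes in, a small HDR environment map comes out, and the supervision is how well that predicted lighting reproduces the captured reference spheres (whose appearance is approximately a linear function of the environment map, represented here by precomputed matrices).

```python
import torch
import torch.nn as nn

class LightingNet(nn.Module):
    """Toy model: limited-FOV background image in, small HDR env map out."""
    def __init__(self, env_h=16, env_w=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, env_h * env_w * 3), nn.Softplus())  # HDR values >= 0
        self.env_shape = (3, env_h, env_w)

    def forward(self, background):
        return self.net(background).view(-1, *self.env_shape)

def sphere_loss(pred_env, sphere_photos, render_mats):
    """Supervision from the captured spheres: each sphere's appearance is
    (approximately) linear in the environment map, so a precomputed matrix
    turns the predicted lighting into a predicted sphere image, which is
    compared against the crop of the real sphere in the same frame."""
    flat = pred_env.flatten(1)                         # B x (3*H*W)
    loss = 0.0
    for photo, M in zip(sphere_photos, render_mats):   # shiny, mid, diffuse
        loss = loss + torch.mean((flat @ M.T - photo.flatten(1)) ** 2)
    return loss
```

The training loop itself is then the usual one: predict the lighting from the background crop, compute the sphere loss, and backpropagate.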
Now, going back to the visual effects problem: we had a really good way of lighting this robot that wasn't actually there and compositing it in so it looked like it was. But what if Hugh Jackman hadn't been available? What if he had another production, or was sick, or for whatever reason we were shooting everything in a movie studio but wanted it to look like he was out in that scene too? Well, if there were a way to do image-based lighting on Hugh Jackman, we know what the light should be; how do we get that light onto him? That became the problem we started thinking about when I got to the USC Institute for Creative Technologies, where I ran a research lab for 16 years before going off to Google.

This idea became conceivably possible when the blue LED came out. We had red and green LEDs when I was growing up; there were no blue LEDs. Those showed up in the 90s, and by 2001, Color Kinetics was selling little lights you could control over USB that could turn any combination of red, green, and blue; you send values from zero to 255, with some gamma curve. And I had the thought that if we surrounded the actor with these, effectively with real-world pixels, we could light them with one of these high dynamic range omnidirectional images, and they should look the way they would have if they were actually out there in the scene. Maybe this would be a useful filmmaking technique. So I wrote a little grant proposal, we bought about 160 of these lights, they were about $100 each at the time, and we made a sphere out of them. And we found that yes, in fact, we can take our HDRI maps, resample them onto these 160-ish pixels, put the actor inside, light them, and they really are lit as if they were in those environments.
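That resampling step can be sketched pretty simply: take the HDR environment map, decide which LED each pixel of the environment is closest to, and average. The equirectangular layout and the nearest-light assignment below are illustrative assumptions; a production version would also fold in per-light calibration and the gamma curve of the fixtures.

```python
import numpy as np

def resample_to_lights(env_map, light_dirs):
    """Resample an equirectangular HDR environment map onto a sparse set of
    LED directions: every environment pixel is assigned to its nearest light,
    and each light gets the solid-angle-weighted average radiance of its
    assigned pixels."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi         # azimuth per column
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(T) * np.cos(P), np.cos(T), np.sin(T) * np.sin(P)], -1)
    solid_angle = np.sin(T) * (np.pi / h) * (2 * np.pi / w)

    nearest = np.argmax(dirs.reshape(-1, 3) @ np.asarray(light_dirs).T, axis=1)
    colors = np.zeros((len(light_dirs), 3))
    for i in range(len(light_dirs)):
        mask = nearest == i
        wts = solid_angle.reshape(-1)[mask]
        colors[i] = (env_map.reshape(-1, 3)[mask] * wts[:, None]).sum(0) / max(wts.sum(), 1e-9)
    return colors   # per-LED RGB drive values (before gamma / calibration)
```

The per-LED colors that come out of this are the values you drive the fixtures with, and, later in the talk, they are also the weights used for image-based relighting.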
The other thing we had to do was somehow cut the actor out of that scene and paste them into the background. That's the standard problem in compositing, and the usual way to do it would be to put a green screen behind them. Well, green screens are not fully automatic, and they also tend to throw some stray green light onto the sides of the actor, called green spill. I didn't want to deal with any of that. So to get the actor's cutout map, sometimes called the alpha channel, what we actually did was put them in front of a black cloth background that reflected infrared light. We put some infrared LED light on that and built a camera with a beam splitter: there's a color camera here, a 45-degree piece of glass, and then another monochrome camera that blocks all the visible light but is still sensitive to the remaining near infrared. The second camera records the actor silhouetted against a brightly lit field, and that's basically the image that gives you the alpha channel you need to swap in a new background.

So if we look at some results from Light Stage 3, our LED stage project: here's the light of the Grace Cathedral light probe, carefully resampled down onto the lighting directions that are available; that's the matte image; and then we can composite in the background and show the actor in that other scene. Here's another light probe image, one that was actually shot for the Fiat Lux movie. Here's another actor, Alana is her name, being composited into the Uffizi Gallery. We also wanted to show this with a three-dimensional environment. I had another project going at the time, a movie about the Parthenon, putting the Parthenon's sculptures back on the building, so we had a CGI model of the Parthenon. We rendered a virtual background plate of moving down the colonnade, along with a virtual light probe, lit with lighting captured in the late afternoon, so the actor would walk from shadow into sunshine and back into shadow. We played that lighting back on the actor as an animated HDRI map sequence, and Alana did her best to amble along without a treadmill. You can see that we get interactive lighting on her; we even put banners along the side, and you can tell that she picks up a little red light on one side of her face as she passes the red banner and a little bluish light when she's next to the blue banner. So we're really simulating the full 360 degrees of illumination.

It took a while before this got used in movies in a big way. We did a small collaboration on a movie called The Social Network, where they needed to do some twinning of one of the characters; that actor came to our light stage and we played back lighting on his face for that. But the biggest application of this LED stage technique that we got to be directly part of was a film starring Sandra Bullock and George Clooney, where they wanted it to look like they were in space, with lots of very interactive, three-dimensional things happening. For Apollo 13 they didn't go to space, but they did fly people in the reduced-gravity planes to shoot some of it; here they weren't planning on that, because it's expensive and wouldn't have worked for the range of shots they wanted. The thought was that computer graphics at this point (the film came out in 2013; they were making it from around 2010) was good enough that they could render planet Earth just fine, render a space telescope just fine, render spacesuits and visors just fine, but they didn't think they could render the human face just fine yet. And they didn't really want to put CGI versions of Sandra Bullock and George Clooney up there in the shots.
They wanted to put the real Sandra Bullock and George Clooney up there. They're not cheap either, so let's put them on screen; let's get their performances. The film was directed by Alfonso Cuarón, one of the great directors of our age, and he actually came to one of our light stages to do an R&D test with us where we were playing back lighting. This is one of the commercial light stages that got licensed out. Here's Alfonso Cuarón; here's one of the visual effects supervisors, Chris Lawrence; we had a variety of cameras and a lot of lights; there's Tim Webber, the visual effects supervisor. And we had a couple of actors act in the light stage, pretending to be in various states of peril, because a lot of stuff goes off the rails in the movie Gravity. It was successful enough that the decision was made to adopt this process. They needed to film in England anyway for tax credits, and to get particularly good reflections in the eyes, and since I had built a test version of a light stage out of LED panels back in 2004, they had enough budget to buy enough LED panels to build a roughly 10-foot cube of LEDs. That was constructed at Shepperton Studios in London. You can see there's a little turntable inside, and these doors can close in, so the actors are basically surrounded 360 degrees by LED panels playing back the environments.

Let's see where the slides went; PowerPoint just quit, there was too much good stuff. Let's see if it comes back; if it doesn't come back immediately, I'll do a reboot. OK, here we go.

So here are some shots from Gravity: Sandra Bullock, not actually in space, actually surrounded by LEDs. The same lighting was used both for image-based lighting of the CGI space suit and for real-world image-based lighting, which we call lighting reproduction, on the face of the actor. So the elements shot in that stage were pretty much lit correctly for going into the movie, and as a result there were some very convincing shots. Nobody had any problem with the visual effects for that film. Even in some of the crazy shots, like this one: this is the previsualization, where Sandra Bullock's character is tumbling out of control, and there's very synchronized, special image-based lighting playing back on her face. And here's the final shot; it really looks like she's in that digital space suit. A huge congratulations to the team at Framestore.

We've continued to push this further. One thing they did have to do with that footage in Gravity was quite a bit of color correction, and something we had noted even back when we did our first light stage paper on lighting reproduction in 2002 is that the light that comes out of RGB LEDs is weird light. It's not the broad-spectrum illumination you get from daylight, or incandescent light, or a phosphor-converted white LED. It creates all the colors you can see using just a narrow slice of red spectrum, a narrow slice of green, and a narrow slice of blue. And since the human eye has only three kinds of cones, with very broad responses, you can trick the eye into seeing something that looks like daylight, the same color as daylight, with a very different spectrum than daylight.
The problem is that if that weird spectrum of light hits an object, interacts with the reflectance spectrum of the object, and is then seen by a camera, you can get quite a different result than with the spectrum you were trying to simulate. Here are two ways of creating white light, with RGB LEDs and with a broad-spectrum white LED. They'll light a white shirt about the same, because it has flat spectral reflectance. But human skin continues to reflect more and more light as you go toward the near infrared, and that red LED is at quite a long wavelength, so people who walk onto these RGB LED stages tend to look way too pink. Lighter skin tones drift toward magenta, darker skin tones drift toward red; nobody looks right, and costumes shift around in weird ways too.

So another innovation we tried to make was to suggest that we shouldn't light people with just RGB LEDs; we should add other LED colors. We built a virtual production stage, this is with a PhD student of mine, Chloe LeGendre, who I also worked with at Google and at Netflix, that has not just red, green, and blue LEDs everywhere, but also amber, cyan, and broad-spectrum white LEDs, so we can round out the rest of the spectrum. That allowed us to do lighting capture and lighting simulation that was far more accurate. For our ground-truth comparison, we had a couple of actors, one standing outside our institute in the late-afternoon sun and one in midday shade, and we photographed them for real in those places. We then went in with our HDRI image capture and added these little color charts, which tell us about the color rendition of the light; they give us enough information about the spectrum of the illumination that we can figure out how to drive all six channels of LEDs, the red, green, blue, amber, cyan, and white, to reproduce that light. Then we had the actors come to the light stage and stand inside it. We shot a special matte as well so we could drop in the background, and we photographed the backgrounds without the actors and composited them in. These are the results: on one side they're inside the light stage; on the other they're actually outside in the environments. With no manual color correction or any other futzing with the images, things lined up well enough that for three months I had the slide backwards, with the ground truth and the reproduction swapped, and Chloe had to tell me there was a problem. I think it's right at the moment. This actually got us a gig to build a big virtual production stage for a company in China called Su-V. They're now using this stage, with a little bit of green screen, but they know how to deal with it, to film actors and add them into scenes that have already been shot, matching the lighting on them so they get a very convincing result. That's been in production for a few years and used on some TV shows and films in China.
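The step of figuring out how to drive those six LED channels from the color chart observations can be posed as a small nonnegative least-squares problem. Here's a minimal sketch under the assumption that camera response is linear in light, so the chart's appearance under a mix of channels is the corresponding mix of its appearances under each channel alone; the array shapes and names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def solve_led_channels(patch_rgb_per_channel, patch_rgb_target):
    """Solve for six LED drive levels (R, G, B, amber, cyan, white) that best
    reproduce how the real illumination rendered a color chart.

    patch_rgb_per_channel: (num_patches, 3, 6) camera RGB of each chart patch
        lit by each LED channel alone at full power.
    patch_rgb_target: (num_patches, 3) camera RGB of the same patches under
        the real-world lighting we want to reproduce."""
    A = patch_rgb_per_channel.reshape(-1, 6)   # (num_patches * 3, 6)
    b = patch_rgb_target.reshape(-1)
    drive, _ = nnls(A, b)                      # nonnegative drive levels
    return drive
```

In other words, one calibration pass records how the chart looks under each LED channel by itself, and then every new lighting environment just requires solving for the six drive levels.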
Now, I may have flubbed something earlier: the LED sphere I described putting all of those color LEDs on was actually Light Stage 3, and you might wonder, well, what were Light Stages 1 and 2? The first light stage was actually my last project at UC Berkeley as a postdoctoral researcher, and I didn't have much funding back then, so it was constructed out of lumber. It had one light on it, and it was pulled around by ropes. The goal was simply to be able to photograph an actor's face lit from every direction that light can come from. The light would spin around over the course of a minute or so, and some cameras would photograph the face lit from each of those directions. That gives you a data set we call the reflectance field; nowadays it would be called a one-light-at-a-time, or OLAT, data set. And it turns out that if you multiply that data set against the colors and intensities of the light coming from each of those directions in an environment, and then add it all up, you actually light that person's face with the light of that environment; the technique is called image-based relighting. So this was before we had RGB LEDs, or enough money to build Light Stage 3, and we could already simulate what actors would look like lit by light from every direction.

That produced the idea for Light Stage 2, which was a bit more professional and fast. It looks like Light Stage 2 has a full sphere of lights, but it actually had one arc of strobes that rotated around in about eight seconds, and we could record these reflectance fields of people for our research. By this point we were at USC, we were in LA, we were where they make movies, and the movie people we'd meet at the SIGGRAPH conference asked us: hey, could we send our actors over and record reflectance fields of them? We want to use your techniques before we've really figured out how to render skin accurately in computer graphics; maybe we can just photograph everything we need to know about how skin and light interact, and use that to produce digital stunt doubles that are lit properly with the light of those environments, as a computational process. And as you can see here, Alfred Molina, who played Doc Ock in Spider-Man 2 (still one of the best Spider-Man movies), became a really good digital stunt double for about 37 shots in the film. Even though that visual effects work was done 20 years ago, it still holds up reasonably well today, because it was based on data of the actor's real face.

They sent other actors too, like Brandon Routh, who became Superman; maybe not the best Superman movie ever made. And there was also a really important film in the history of computer graphics, The Curious Case of Benjamin Button, where Brad Pitt's character ages backwards. For roughly the first 45 minutes of the film, he's actually a CGI head replacement of an older version of himself. Brad Pitt wasn't old enough to play that role yet, but some amazing Hollywood makeup artists made this maquette of what an old Benjamin Button would look like. They sent it to our lab, we photographed it under all of the lighting directions, we got a 3D scan of it, and that data helped Digital Domain make the head replacement, which they would then animate for Benjamin Button. Here's some of our reflectance field data. The funny thing is we still shoot this for actors: on Tuesday, at our studio at Netflix, we shot one-light-at-a-time data of actors. It's a standard thing, because if you have HDRI maps, you just multiply that lighting data set against the reflectance data set and it will show you, accurately, what that character looks like in that lighting.
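That "multiply and add" is the whole trick, and it's worth seeing how little code it is. A minimal sketch, assuming the OLAT images are linear (not gamma-encoded) and that the per-light colors have already been obtained by resampling an HDRI map onto the stage's light directions, as in the earlier sketch:

```python
import numpy as np

def relight(olat_images, light_colors):
    """Image-based relighting: the appearance under any lighting environment
    is a weighted sum of the one-light-at-a-time (OLAT) images, where each
    weight is the RGB color and intensity the environment assigns to that
    light's direction."""
    result = np.zeros_like(olat_images[0], dtype=np.float64)
    for img, rgb in zip(olat_images, light_colors):
        result += img * np.asarray(rgb)[None, None, :]   # per-channel scale
    return result
```

Because light adds linearly, this weighted sum really is the photograph the camera would have taken if all of those lights had been on at once with those colors.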
And that made it so they could be very sure that the 3D CGI version, with image-based lighting, responded to light the same way as the original maquette. That produced visual effects shots of old Benjamin Button that still look quite accurate. By this point, enough movies my lab had contributed to had ended up winning the Oscar for best visual effects, including The Curious Case of Benjamin Button, that the people behind the technology could get some recognition as well, from the Academy's Scientific and Technical Awards. That ended up being the first Academy Award we received for the technology, back in 2010.

Now, I think we have about ten minutes left, and I want to cover a couple of things I promised I would and hopefully leave time for a question or two, so let me skip ahead a little bit. What I wanted to mention is that we also ended up inventing a way of using polarized illumination in a light stage to capture a three-dimensional model of the face that has pretty much every skin pore and fine crease. This builds on another thing I learned in computer vision class, a form of photometric stereo: if you light something from different directions and see how the light plays off the surface, you can get a per-pixel estimate of the surface normal. We figured out a way of extending this to spherically lit gradient images, so it works for all of the cameras looking at the subject from all the way around, and we also found a way to do it from just the specular reflection of the light. When light hits skin, some of it scatters around under the skin, blurs out, and emerges somewhere else; that's the skin-colored subsurface scattering. The rest is the shine off the surface of the skin; that's the specular reflection. We found a way to isolate that optically, by polarizing all the lights and polarizing the camera, then flipping the polarizer on one or the other to bring the specular reflection in and out. That isolates just the shine of the skin, and when we analyze the surface normals of the skin from that, we get 3D models that actually have accurate skin pores and fine creases.
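Here is the textbook version of that photometric stereo step, as a minimal sketch: a roughly Lambertian surface lit one direction at a time, with the per-pixel normal recovered by least squares. The light-stage version replaces the point lights with spherical gradient illumination patterns and applies the same idea to the polarization-isolated specular reflection, which is what brings out the pore-level detail; the arrays and dimensions below are illustrative assumptions.

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Classic photometric stereo: for a (roughly) Lambertian surface,
    intensity = albedo * max(N . L, 0).  With several known light directions
    we solve a per-pixel least-squares system for the albedo-scaled normal.
    images: list of K grayscale HxW arrays, one per light direction.
    light_dirs: K x 3 unit vectors toward the lights."""
    I = np.stack([im.reshape(-1) for im in images], axis=1)   # (pixels, K)
    L = np.asarray(light_dirs)                                # (K, 3)
    G, *_ = np.linalg.lstsq(L, I.T, rcond=None)               # (3, pixels)
    albedo = np.linalg.norm(G, axis=0) + 1e-8
    normals = (G / albedo).T.reshape(images[0].shape + (3,))
    return normals, albedo.reshape(images[0].shape)
```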
That allowed us to build one of the first photoreal animated digital characters, which actually came out slightly before The Curious Case of Benjamin Button, called Digital Emily. It was a little R&D project we did with Image Metrics, and it was a pretty decent rendered human face for 2008. And that got us the gig to work on Avatar, James Cameron's small independent feature that he was doing in the late 2000s. They ended up bringing most of the cast of Avatar over to our light stage. There's Zoe Saldana, whose face and 3D scans would get transformed into the character of Neytiri; Neytiri is basically Zoe Saldana from the cheekbones down, and her forehead and such borrow from Zoe's textures. There were digital stunt doubles made of Colonel Quaritch, played by Stephen Lang; there are shots in the film where he's totally digital, based on the scans we got of him, even animating and delivering lines. Sam Worthington's character, Jake Sully, is totally digital in this shot here; Sam Worthington was certainly being paid well enough to just lie there under some lighting, but since there's so much interaction between his face and Neytiri's face, it makes a lot of sense for both of them to be CGI and rendered in the same rendering. And the actors kept coming.

We had Tom Cruise come in for the movie Oblivion, where there's a twinning sequence. Both Activision and Electronic Arts have bought light stages from our lab to do video games. We worked on the movie Maleficent to help with a bunch of pixies; it was also a day that Angelina Jolie came into the office, which was fine. And we got an interesting request from our friends at the Smithsonian Institution, who were wondering: is there any way you could build a portable version of your light stage and bring it to Washington, D.C. to scan a VIP who's on the East Coast? They didn't tell us much more than that, but I had a pretty good idea what they might be getting at, so I said yes; I did not actually know how to do that at the time. When the project went live, this is a time-lapse over about four days: off-frame to the left is the light stage, which is getting stripped for parts, and we're building them onto this mobile unit here. We got 50 of the light sources and all of our sports-photography cameras, and on June 9th of 2014 we had a pretty important subject in the State Dining Room of the White House that we got some high-resolution scans of. Pete Souza, the White House photographer, is right here; he's about to get a really awesome photo at that very second, and he just recently emailed me the final high-res version of it, which was pretty cool. This was all done under the portrait of Lincoln in the State Dining Room, because Lincoln is the only other president we have a 3D record of, from a facial cast made while he was in office. We got good photos, we got to do our polarized-light trick, and our subject was very good at making good expressions and staying still. We got a super high-resolution model of President Obama's face. What the Smithsonian wanted to do was a sculpture of him for the National Portrait Gallery, so very rapidly, the following week, they merged our scan with a scan of the back of his head and his shoulders done with a commercial scanner, and it was presented to him later that month. It seems like he was at least amused by the whole process.

Just to wrap up with a couple of last things, some other films we got to work on: Furious 7, where, sadly, Paul Walker, one of the actors, had passed away; we ended up scanning his brothers, Caleb and Cody Walker, to help Weta Digital create a 3D version of Paul Walker so he could finish his role in the film. That was some stunning visual effects work with 3D faces, around 300 shots of the film, which was not publicized at the time. Also Blade Runner 2049, where the character of Rachael was based on two actors: one was a sort of skin-donor actress we scanned, and one was Sean Young, the original actress who played Rachael in the 1982 film. Those got merged together by MPC to do a scene with a digital version of Rachael that's quite realistic. We also helped Weta make Will Smith look like a much younger version of himself in Gemini Man. And once we had all of these films coming together, the new polarized gradient process ended up earning our second, and most recent, Academy Award. And I think at that point we basically have just a couple of minutes left for questions, so let me jump to my final thank-you slide and just say: it has been wonderful to be part of the University of Michigan. It will take you places.
And thanks everyone for contributing to this work. Thank you so much.