Display Week is where the world learns of the latest advances in display technology. From theatrical production to personal electronics, we are united by our goal to make displays indistinguishable from reality. We've been on this road for a long time, and many of you may think we're getting close to the end. Today, I'll try to convince you we've barely started, but at least we have a map of where we're going. It was given to us a century ago by Gabriel Lippmann. And here it is. In 1908, he asked: is it possible to create a photographic print in such a manner that it represents the exterior world framed, in appearance, between the boundaries of that print, as if those boundaries were opened on reality? Here he is in 1908, proposing essentially a perfect television, a perfect window into a virtual world. Keep in mind how ambitious this claim was. When Gabriel started his work in the early 1880s, state-of-the-art photography was black and white. That's where he began.

And so, with a lifetime of work, how far did Gabriel get? Well, I'm happy to say he received a Nobel Prize for inventing a novel color photographic process. These are just a few of the prints he realized in his own lifetime. And we at the Society for Information Display are inheriting this legacy. When you go to the trade show floor, what you're looking at is Lippmann's window, or at least our attempts at creating it. And how close are we? Well, televisions are near retinal resolution, if not beyond, with beautiful high dynamic range and beautiful color gamuts. So all of us hopping on planes, have we reached the end of this road? Is it just incremental work for us? Displays aren't gonna get much thinner, they're not gonna get much more compelling. So what comes next?

Well, Gabriel Lippmann's vision wasn't just about beautiful high resolution and beautiful dynamic range. There's a key piece missing in every single television you'll see on the trade show floor. It's a dream that many of us have had: to create a true three-dimensional display, to allow our head to peer to the side and look through that window. That simple moment is missing. And we've attempted it for nearly two centuries now using stereoscopes, View-Masters, red-blue glasses. Through whatever trickery we can, we're trying to create a 3D display, but we have not done it yet. We've created a near-perfect television, but a flat display.

And so 10 years ago, I set off to the MIT Media Lab as a graduate student, and I thought, like Gabriel Lippmann, what should be my life's work? What is a challenge in displays that truly excites me? And so I met Gordon Wetzstein, who's now a professor at Stanford. I met Matthew Hirsch, who now has a startup on the very topic I'm about to describe. And of course, my own advisor, Ramesh Raskar. And together, we tried to achieve Lippmann's window. And here's what we made. It's called a multi-layer liquid crystal display. Rather than using one television, we decided to stack a few up, and by exploiting motion parallax, we created a compelling three-dimensional window. Here's what it looked like at the time. You'll notice that with subtle head movements, you can see correct parallax, a correct stereoscopic and automultiscopic image.
But the real insight here is to no longer depend solely on optics, but to use computational imaging to split image formation not just across the hardware components, but across the algorithms: to actually exploit knowledge of the world in a light field, and to make these strange, high-frequency patterns, these surrealist paintings that give you the illusion of a window. So there you have it. That was my graduate work. It was my passion. I set off to reinvent what's on the other side of that hallway.

So you may find it very odd that today I'm a research scientist at Oculus. In 2014, I saw the early Oculus prototypes, and my first response was: that is cheating. I had spent so much time getting rid of the glasses. How can you possibly tell me I'm gonna put glasses on my head? And then I got an email out of the blue in April of 2014 from Atman Binstock, now chief architect at Oculus. And he said, hey, Doug, how about you join Oculus? We're gonna start a research lab. I think it'll be fun. And he invited me to come see all their top secret prototypes. And of course I hopped on a plane. I think all of us would, because I wanted to know what they had behind closed doors. Was it really better than Lippmann's window?

So that day in Irvine, I saw a lot of things. I saw prototypes of what ultimately became the Oculus Rift and modern VR. I saw a lot of demos, but one of them proved pivotal, and I wanna share it with you today because it changed the course of my career. I put on this prototype headset, and Michael Abrash, now chief scientist of Oculus Research, today known as Facebook Reality Labs, said, Doug, what do you think? Well, it wasn't very inspiring. I grew up playing video games; I've seen a lot of boring courtyards and skyboxes. I said, Michael, really? I got on a plane for this. I couldn't see him because I was wearing a headset. I'm sure he chuckled a little, and he said, look down. So I looked down, and I saw this tiny little robot gazing up at me. Now, the headset was well calibrated. Distortion was corrected. Chromatic aberrations were controlled. It was compelling. But I paused and I said, Michael, Atman, I was creating TVs at MIT five years ago that did this better. They were tiny little surfaces on a wall. You didn't have to wear wacky glasses. I'm not sure this is the future of displays. Once again, I think they were chuckling, and they just said, how about you turn around? And so I swung my head around, and right next to me was a looming giant robot about to crush me. This was the exact same 3D model, just simply scaled up. And it took me a moment to take this in, because no television would ever achieve this. No theater screen could put that foot so close to me. The only way to achieve this would be with glasses.

Oh, there's one other way we could do it. And this is a vision of the future I think will come to pass, but I don't think it's for everyone. You could create that giant robot if we turned our living rooms into holodecks, a television on every surface. It's certainly possible. Using the processes that you'll hear about this week, we could make living room TVs. Is this the future? Well, I think it will come to pass, and it'll be absolutely incredible. This may in fact be the home theater of the future, but it's gonna be a whole lot of hammers, saws, nails, and contractors swarming through your house. This just isn't for everyone. And remember, I began with Gabriel Lippmann's vision. What was it he was trying to get at? Well, since 1908, in 110 years, something's changed.
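As an aside for the computational display crowd: here's a minimal sketch of the kind of multiplicative layer factorization those stacked LCDs rely on. It's a toy under stated assumptions, not our actual solver; the 1-D light field, the integer per-view disparities, and every name in it are illustrative.

```python
import numpy as np

def factorize_two_layers(views, disparities, iters=200, eps=1e-8):
    """Toy two-layer multiplicative display model: view v of a 1-D light
    field is approximated as front(x) * back(x + d_v). NMF-style updates
    keep both layers nonnegative, as physical LCD attenuators must be."""
    n = views.shape[1]
    front = np.random.rand(n)
    back = np.random.rand(n)
    for _ in range(iters):
        # Update the front layer: ratio of observed to modeled intensity,
        # accumulated over views (np.roll(back, -d)[x] equals back[x + d]).
        num = sum(views[v] * np.roll(back, -d) for v, d in enumerate(disparities))
        den = front * sum(np.roll(back, -d) ** 2 for d in disparities)
        front *= num / (den + eps)
        # Update the back layer symmetrically, shifting each view into
        # the back layer's own coordinate frame.
        num = sum(np.roll(views[v] * front, d) for v, d in enumerate(disparities))
        den = back * sum(np.roll(front ** 2, d) for d in disparities)
        back *= num / (den + eps)
    return front, back

# Three 64-pixel views with -1, 0, +1 pixels of parallax.
rng = np.random.default_rng(0)
views = rng.uniform(0.1, 1.0, (3, 64))
front, back = factorize_two_layers(views, disparities=[-1, 0, 1])
```

The optimized layers come out as exactly those strange, high-frequency patterns: a factorization of the light field rather than any one image.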
What's changed is that Lippmann's window is no longer a rectangle on a wall. It's a smaller rectangle in our pocket. Displays have become mobile. For most of our lives, we now look at a much tinier glowing rectangle. And I don't think the world's going back. Personal mobile displays are the future. So you combine these two trends: how are you going to deliver a beyond-theatrical experience to every individual in the world, anywhere they want to be, at any time? It has to be with wearable displays. And so, to my surprise, on that flight back from Irvine, I signed on the dotted line and I said, let's join this adventure. We're starting with shoeboxes. It's gonna be a long ride, but I can't wait to see where we get.

So I moved away to Redmond, Washington. In my first weeks and months, and really the first year, there weren't many people at the lab. We were building it, we were recruiting. And I kept asking: how can I bring about this wearable display future? What group of individuals should work together to realize this dream? We worked on all the obvious stuff, right? As a display engineer, I'm like, great, we need much higher resolution. Let's go do that. You know what? High dynamic range is amazing. Let's find a way to do that in headsets with good contrast. So we looked at all these things, but they all seemed like connecting the dots. It was obvious, straightforward stuff.

And so after about a year, Michael Abrash came to me with a second demo. And this proved just as pivotal as that robot. He said, Doug, you're gonna love this. It's even better than the robot because it's the exact opposite. I said, what do you mean? He's like, put on the headset, try it. And so this continues to be my favorite VR demo. It's called Paper Town. If you ever grew up playing with action figures or dolls, you know this incredibly rich experience. It's more powerful than a robot because VR isn't just about stereo displays. It's not just about immersion. It's about motion parallax. No 3D TV does a good job at this. But with VR, you can lean in, you can move your head to the right and left, and you can grab something. And so this rich stereoscopic, 3D experience really shined for Michael. He said, this is what VR is for.

So I was excited to try the demo. We set it up. Unfortunately, anything that's hyped never quite delivers on that hype. And so on the left is what Michael saw, which was incredible. On the right is what I saw. As I leaned in, I'm like, oh, this is great. This is great. Oh, wait a minute. This is a blurry mess. Why is this thing so blurry, Michael? He said, what do you mean? I'll get back to that in a minute. But this was that pivotal moment. This is when I realized something has to be fixed in VR. Something will have to be fixed in AR. To make this the successor of Lippmann's legacy, we have to make sure resolution is good in the near field. What I didn't know at the time is that this is not a problem one individual, no matter how stubborn, can solve. It actually took a team of over 40 scientists and engineers working diligently for the last three years to realize just the beginnings of the technologies that will solve this problem in VR and AR. I can't name all of them, but I can thank all of them. And so I really encourage you to look out at the work we do and keep track of these individuals, because they're the ones who really did the work I'm about to share. So let's go another layer deeper into the onion.
Let's start understanding the science of what's going on. Why is it that near objects are blurry in VR? And second, why didn't Michael see the blur? Well, let's go back to Paper Town. Let's look at that fire hydrant. And let's just get out a piece of note paper and think about how the eyes work. Pretty simple stuff. When you look at the fire hydrant, your eyes must rotate in their sockets. This is a physiological response known as vergence. When you verge, and this is the trickier bit, your eyes must also focus to maintain a sharp image on your retina. This is a reflex where the crystalline lens of your eye deforms, and it's known as accommodation. The important bit from vision science and physiology is that accommodation and vergence are linked. You may have heard of the vergence-accommodation conflict. There's no conflict in reality. Vergence and accommodation responses work in lockstep. That's good in reality. So when you look at that fire hydrant, it's sharp and the background is subtly blurred. When you look to the background, the buildings are sharp and the foreground's blurred.

So what happens in VR? Well, this is when you get out that piece of notebook paper and you realize there's a subtle problem. It takes a minute to explain, which is why I'm taking a few minutes to explain it. When you put on a VR headset, at first everything looks good. The buildings are sharp, but something is a little wrong. Can anyone spot it? Let me flip back and forth here. Here's reality when you're focused far. Here's a VR headset. Reality, VR headset. So it's a pretty subtle difference, but the foreground is slightly blurred, and blur is a critical cue to accommodation. Now, those of you who come from a computational background would say, no problem. We'll add an eye tracker and we'll add synthetic blur. You can certainly do that. We don't yet know what type of blur to add, but the field is working on that question.

But remember, the moment I was worried about is looking at those tiny campers next to the fire. So what happens when vergence and accommodation are linked and you look close in a VR headset? Everything's thrown out of focus, because a VR headset uses a single lens to make a fixed-focus display. Everything's always at about two meters. So when you look close, you're essentially taking the camera lens of your eye and throwing the focus knob completely in the wrong direction, which means it's frustratingly blurred.

Now, why did Michael see it sharp? Well, those of you in the room over the age of 55 or so, and I'll be there in a few years, I'll be entering this phase, you have what's known as presbyopia. I apologize, but that's life. And you probably use bifocals, trifocals, or progressive lenses. Your eyes are perfect for VR. They're not perfect for AR, but they're perfect for VR, because a fixed-focus display is what you want. For you, VR is already better than reality. But eventually we'll have intraocular implants, all of us will have beautiful accommodation, not just in our youth, and this frustration will not go away on its own.

So as an engineer, I said, great. This I'm excited about. It's super subtle, it takes minutes to explain, but I'm excited about it. Let's solve it. You get out another sheet of paper and you start writing down all the technical solutions. And there have been decades of inventions trying to tackle this problem. My own PhD thesis touched on several of these. These red, green, blue charts, these sorts of things. There's no blue here; it means something else.
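Before walking through those charts, it helps to put rough numbers on the problem. Here's a back-of-the-envelope sketch; the two-meter focal plane comes from the talk, while the scene distances are my illustrative assumptions.

```python
# Vergence-accommodation mismatch in a fixed-focus headset, in diopters
# (inverse meters). The single lens puts everything at ~2 m optically.

def diopters(distance_m):
    return 1.0 / distance_m

DISPLAY_FOCUS = diopters(2.0)   # fixed focal plane: 0.5 D

for name, dist_m in [("background building", 20.0),
                     ("fire hydrant", 1.0),
                     ("tiny campers at arm's reach", 0.3)]:
    error = abs(diopters(dist_m) - DISPLAY_FOCUS)  # accommodation error
    print(f"{name}: {error:.2f} D of defocus")

# background building:        0.45 D  -- barely noticeable
# fire hydrant:               0.50 D  -- subtle blur
# tiny campers at arm's reach: 2.83 D -- a frustrating, blurry mess
```

Distant content sits within a fraction of a diopter of the display's focus, which is why the courtyard looked fine; near content is diopters away, which is why the campers didn't.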
These charts, everyone will color them differently, but the conclusions are usually the same. The really exciting academic ideas, like light fields, you've probably heard of those, make unacceptable trade-offs. When you're sitting inside of a consumer electronics company, when you want to actually change the world, it's not acceptable to lose 25x in resolution and have terrible contrast. So light fields, right now, are not looking promising. Another great old friend we're seeing on this list is holography. Holography is the one true future king of displays. We all feel in our hearts that across the hallway, someday, there will only be holographic displays. It's always 10 years from now; every year, it's 10 years from now. But we will get there, and it will be incredible. Maybe those living room displays will also be holographic, but it's not happening anytime soon. And if it does, I'll be happy to be wrong.

So you look at this chart and you really see two things that are gonna get us out of the bind. One is the obvious: just don't solve the problem. Use fixed focus and just try to keep content away from arm's reach. Sadly, VR is all about arm's reach, grabbing stuff, looking at things. That solution will only work for so long. So I think most of you who are engineers would look at this chart and say, you've got one thing you might be able to try, which is varifocal displays. Varifocal is mostly green other than two bits, and the two bits that are hard are hard. One, you need eye tracking that works for 100% of the people, 100% of the time, or at least close to that. That's a small miracle, but we could do that. Two, you need adaptive optics. You need to change focus in some manner so that it can track with the eye.

That's varifocal; let's take a look at it. Let's see this at a high level, on the back of the envelope. So you have the Paper Town scene; we zoom out. It spans about a four-diopter range, going from 25 centimeters to optical infinity. Here's what a VR headset looks like today: a single fixed focus at about two meters. Shiwa in 1996 was the first I know of to propose the simple idea I just explained: take an eye tracker and move the focus dynamically to follow the eye. Super simple concept. It's taken more than 20 years to begin to see implementations. I see Kaan here in the front row, one of my acquaintances; he's built a varifocal headset at NVIDIA. Several of these are starting to emerge. My friend Gordon Wetzstein and Robert Konrad, who presented the other day, have also been showing varifocal displays. So the industry is just beginning to realize these technologies.

And we at Facebook Reality Labs began our work on varifocal really that day, more than three and a half years ago, when I decided we'd tackle Paper Town. And so over the last three years, we've embarked on a strategic, methodical, slow-paced series of prototypes that try to reduce this to practice. This was revealed to the world about two weeks ago, but the Society for Information Display is my home, and really, as an engineer, I want to tell you something that is new news, not old news. So I want to go one level deeper and show you how, over these three years, we incubated these technologies. First up, in April of 2014, we began a two-month project to create our first varifocal headset. And we said, since it's never been done, we don't really care if it's noisy, don't care if it vibrates, don't care if it's expensive, don't even need eye tracking.
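As a back-of-the-envelope aside before the prototype story: how much mechanical travel does "move the focus dynamically" actually require? Here's a thin-lens sketch; the 40 mm viewing lens is an illustrative assumption, not any prototype's actual optic, and eye relief is ignored.

```python
# Varifocal actuation via Gaussian optics: a display at distance o behind
# a simple magnifier of focal length f forms a virtual image at
# D = 1/o - 1/f diopters. To place the image at a target depth, solve
# for the display position: o = 1 / (D + 1/f).

LENS_POWER = 1.0 / 0.040          # assumed 40 mm viewing lens -> 25 D

def display_distance_m(target_d):
    return 1.0 / (target_d + LENS_POWER)

far = display_distance_m(0.0)     # virtual image at optical infinity
near = display_distance_m(4.0)    # 25 cm, the near edge of Paper Town
print(f"screen travel for 4 D: {(far - near) * 1000:.1f} mm")  # ~5.5 mm
```

A few millimeters of travel covers the whole four-diopter range, which is why a mechanically actuated screen is plausible at all; the hard parts are doing it fast, silently, and in lockstep with where the eye is actually looking.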
The goal was just to see if Paper Town was any good when you lean in. And so Ryan Ebert, the lead mechanical engineer on this effort, said no problem. And in a month he created this headset. We turned it on, and again, I told him not to worry about noise and vibration, and he didn't. Here's what it sounds like. That's where you begin in research. It's duct tape and a bunch of LEDs glued on the thing. But this sways hearts and minds. There's a saying I believe John Carmack had, which is: a good demo is like religion on contact. You know that this is the right direction. And so I had every single person in the lab put on the headset and lean into Paper Town. And they're like, oh, you've been talking about this thing forever. You won't shut up about it. It's actually a problem. I get it. And it looks like this could solve it. What are you gonna do next?

So it took us about a year, but we decided to do the next big step, which is to add real eye tracking. Our second prototype, if the first one was climbing a hill, this was climbing a mountain. I went to Alexander Fix, Robert Cavin, and the beginnings of our eye tracking team, and I said, guys, can you give me a state-of-the-art eye-tracked headset in, I don't know, a few months? That would be awesome. And they did. And so we created the world's first eye-tracked varifocal headset. There's a lot to unpack here, but I'll show you a demo we recorded over two years ago. On the bottom left are my eyes gazing through this headset. In the center here are the raw left and right images going to the underlying displays. When I turn this on, hopefully it'll appear on the screens, you can see a small gaze cursor tracking objects. This doesn't work for 100% of the people, 100% of the time. But again, it was there to see if eye-tracked varifocal was necessary, and it was great. Here we have early prototypes of the Touch controllers. We're looking back and forth. There's no way you could cheat this. This is what we learned, right? You could try to use hand tracking, depth maps, heuristics, but there's no substitute for varifocal with good eye tracking.

So that was two years ago. I went to Michael Abrash and I said, I think this is it. We now know that solving focus matters in VR. It may not be as important as getting the basic stuff, super wide field of view, super high resolution, but eventually it will be the core problem in displays. But I really don't think mechanical is the way to go. Anything but that. And Michael said, oh, why do you think that, Doug? And I said, have you heard the headset? Here's what this one sounded like. And in fact, it worked; it was just a little noisy. That's because we dialed everything to 11. We made the screens actuate orders of magnitude faster than the human eye; the eye tracking was better than anything you'd ever expect. It was over-engineered, because we were trying to figure out what it needed to be. And so Michael said, well, maybe you're not a mechanical engineer. Maybe you should let Ryan tackle that problem for a year. And that's what Ryan Ebert and a half dozen other mechanical engineers did. And so over a year ago, they created this headset, prototype number three, which I think pulled it off. It shocked me as I watched this evolve, not being a mechanical engineer, that we could go from that popcorn prototype to something as silent and vibration-free as an SLR camera with a small team. It surprises me to this day. And it's what I love about being a research and development scientist at Facebook.
And so here you go. This is what Ryan's team achieved. I had no involvement; I can't claim to be a mechanical engineer, but I love their work. This is what it looked like, and pretty much what it sounds and feels like: nearly vibration-free, nearly silent. So that's prototype number three.

Now, at FRL, it's not enough to just write the papers, to make a cool demo. We're in the business of changing this industry. We're in the business of coming to this room and hopefully inspiring some of you and your companies to continue this journey in VR and AR with us, because VR and AR have not plateaued. There's stuff like this that all of us can advance towards Gabriel Lippmann's vision and beyond. And so we continue at FRL to try to advance these prototypes. There are many focus-supporting headsets we're working on, one of which we revealed two weeks ago, called Half Dome. Each of these is always targeted at advancing one aspect of the concept. And this aspect is one I'm passionate about as a gamer: you can never give me enough field of view. I go into a theater. I go into a surround theater. I say, not enough. Let's get more field of view. And so this prototype leverages work by our VR optics team, who created a wide field of view viewing optic, which we then integrated with improved versions of all the systems I just described. And I'll zoom in on it. On the left, you see a normal Rift. On the right, you see the Half Dome prototype with eye tracking and wide field of view optics. So again, we have not reached the point of diminishing returns. VR and AR will get better year over year, and it's us in this room who'll make those displays better. In this case, we already know that doubling the field of view is certainly possible on the trajectory we're already on.

Here's what that demo looked like. This is Ashley just picking up a device, and this is the beautiful moment when you're like, great, great, super blurry. So how does it look in headset? This is shot through the lens. You can see, with varifocal turned off and the camera focused near, modeling vergence-driven accommodation, everything is blurry, until you turn varifocal on. And the moment the eye tracking system drives the moving screens, you see a sharp image. So there you have it. But that's a whole lot of work to make a circuit board look good. After all these years, why do I think we really wanna solve focus in VR and AR? I think there's one standout use case, and it's not the giant robot. It's reading. Don't ask me why we use cassette tapes in VR; it's retrofuturistic, I guess that's cool, but that's what we're looking at here. Without varifocal or some other focus-supporting technology, it's gonna be a blurry mess, unless you wait long enough until you need those bifocals and trifocals, and then AR will be frustrating instead.

Okay, so that's varifocal. That's a little bit more than we've shown before. Hopefully you find it inspiring that eye tracking and adaptive optics could be combined in a modern VR headset, with a lot of effort by 40-plus individuals over three years. Along this trajectory, we've not just been working on varifocal; we've been working on everything. And we're always asking ourselves one question: do we really need the eye tracking? We do need the eye tracking team, yes, but can we somehow avoid eye tracking itself? It's one thing to have one miracle in a concept. It's another to have two.
Depending not just on great adaptive optics but also on eye tracking means we're really stacking the deck against ourselves. And we do respect that eye tracking team. We believe in them. They will deliver the pieces we need for our prototypes, but we're always trying to eliminate that eye tracking. So you look back at that chart of ideas I showed earlier and you ask, is there anything that could happen sometime soon that would solve the vergence-accommodation conflict? And there's one idea that was a little green and a lot of red, and that's multifocal displays. Rather than have an eye tracker move focus dynamically with your eyes, present multiple focal surfaces in rapid succession. This is possible now because we have fast adaptive optics and fast displays. So it's not too many stacked miracles, but it is a perceptual question. The real question you should ask yourselves, and if you came to FRL, you might be asked it in an interview: how many planes for how many diopters?

Well, here's the problem, and part of the answer to that question. Multifocal displays are like a focal stack in a camera. If you don't have enough focal planes, things will be blurred between the planes. Here's a simulation. If I take an eye chart, that bottom line there looks perfectly fine if it's presented exactly on a display layer. But multifocal displays, if they're doing their job, cause your accommodation to work correctly. And in VR and AR, you can walk anywhere you want. So in general, there's probability zero of the content ever sitting exactly on a display layer. You're always gonna be looking between layers, and it's gonna be subtly blurred. And once again, for text legibility and reading, you're gonna need more layers than you're willing to build. That's the bind multifocal is in.

So we set out a couple of years ago, in parallel, to see if we could make multifocal great. One way you might solve it is to combine it with varifocal: move those planes to where you need them. This is a concept we introduced a few years ago called adaptive multifocal. But really, there's just a host of questions you have to ask here. And none of it involves building a beautiful headset. All of it involves building something to do vision science with. And so I'll show you in a minute what we call the multifocal perceptual testbed. It really had this set of goals. What do you really have to do? How good does the eye tracking have to be for varifocal, multifocal, adaptive multifocal? For the rendering crowd, how do you actually decompose content across these layers? These are all open questions in our field.

And so this is the device that we built. And this is, again, what I love about being a research scientist: this kind of crazy nonsense would never fly anywhere else. It's not head mounted. It may look insane. It has dozens of lenses. But it's the work of one man, Yusufu Sulai, a brilliant optical scientist at FRL who painstakingly designed and aligned all of this. And then I worked with the photographers to photograph it. That was my contribution. So there you have it: the world's first comprehensive multifocal testbed. Let me dive into all the subsystems Yusufu worked through. First up, we took some of that eye tracking technology you saw earlier and packed it onto the table, so we have a state-of-the-art eye tracker.
Next up is the novel bit that Yusufu has a special background in, which is wavefront sensing. It allows us to measure not just where your eyes are verged, but where they're accommodated. And you always go overboard when you're trying to understand requirements: we have three displays for the left eye, three for the right eye, a bite bar, and an IPD adjust, so it's a six-display headset. Here's what it looks like for just one eye. We took some of that varifocal technology and implemented not just a varifocal but an adaptive multifocal display, so this can test all known modes of multifocal in real time. Then you get that great optical scientist to throw in a bunch of beam splitters and lenses, and you route it all across the table to the eye box. Now it's pretty much a VR headset. Throw in some eye tracking rings, then take a superluminescent diode, shine it across the table onto your retina, and back out of your eye onto a wavefront sensor. And what you can do with that is measure your eyeglasses prescription at 30 to 60 frames per second. This allows you to answer the real questions: is this driving accommodation correctly, and do we need eye tracking?

Remember, all of this began with a simple goal: can we get rid of the eye tracking to make all of this more practical? My personal feeling is, signs point to no. This is where we're at. On the right, you can see what things look like with eye tracking. On the left, you can see what things would look like with just a couple of millimeters of displacement of your eye. So with active eye tracking, we can align these layers. It's pretty easy to explain this, and I always like to make things interactive. Pick one of these monitors and point your finger straight at it, then move your head just a little bit. If you're doing this right, your finger is no longer pointing anywhere close to that display. Congratulations, you've experienced motion parallax. And that's the real rub with multifocal displays. Due to motion parallax as your pupil moves in the eye box, these layers will misalign and you'll get that blurry mess on the left. Figuring out this trick is one of the key puzzles to unlock to make multifocal displays jettison the need for eye tracking. Otherwise, we're back where we started, with a much more complicated system.

Okay, so I have one more system I want to share today, which is the most ambitious academically. We are a stubborn bunch. In everyone I hire, I think the key feature I'm looking for is that they are just stubborn. They don't listen to me, they don't listen to anyone, because they believe in their dreams. And that sounds trite because it is, but really, Gabriel Lippmann's window was my dream. I wanted to create that window, and it drove me to do it. Many members of my team just said, no: multifocal displays, varifocal, cute engineering, but we gotta do better. And so Alexander Fix, Nathan Matsuda, and myself set out three years ago to create something that was different from the others, that somehow solved the problem. And this is what we ended up with. We said, if you can never have enough focal planes, why not create a focal surface? Just bend the focus to follow the contours of the scene, to try to hit as many points as possible. There's no camera I've ever heard of that does this, certainly not a consumer camera, because why would you? How are you actually gonna dynamically vary the focal surface at 90 to 120 frames per second in VR? It sounds sort of nuts, but we have a background in computational displays.
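For the computationally inclined, that finger experiment has a one-formula version. The sketch below uses illustrative numbers; the layer depths, pupil shift, and the roughly one-arcminute acuity figure are my assumptions, not the testbed's measured geometry.

```python
import math

# Motion parallax between two focal layers: if the pupil translates by
# dx meters within the eye box, content on layers at depths d1 and d2
# slides apart by roughly dx * |1/d1 - 1/d2| radians.

def misalignment_arcmin(pupil_shift_m, d1_m, d2_m):
    radians = pupil_shift_m * abs(1.0 / d1_m - 1.0 / d2_m)
    return math.degrees(radians) * 60.0

# A 2 mm eye displacement with layers at 0.5 m and 2 m:
print(f"{misalignment_arcmin(0.002, 0.5, 2.0):.1f} arcmin")  # ~10 arcmin
# The eye resolves roughly 1 arcmin, so a couple of millimeters of
# untracked pupil motion visibly tears the layers apart.
```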
So, leaning on that background, we said: even if we miss focus, we can take that old trick of deconvolution and slightly sharpen the image. We don't need to hit every single 3D point. And if that's not enough, then you can use the multifocal trick and strobe planes in rapid succession, but hopefully you need far fewer planes than multifocal would need for a good display. So this is it: one last Hail Mary pass to try to get around the need for eye tracking. According to simulation, focal surfaces should do a very good job. When you look into the distance, without any eye tracking, you'll see a sharp image, and everything works correctly when you look into the foreground.

So this is what I love about internships. You work all this out, and then you punt on the problem. You say, I have no idea how you're gonna vary that focus. Start asking people: ask people in interviews, ask people in the hallway, talk about it at conferences, it doesn't matter, just solve the problem. And many of you who have never seen this work are probably already off to the races in your head on how you'd solve this locally varying focus. There are lots of ways to do it. Some of them will take decades to create the device. And this is what I love about coming to Los Angeles, coming to a conference like SID. It's not about the presentations; it's about the expo hall. And really, when you're trying to solve a question like this, you don't go to the big, giant, glowing booths. What I learned from Ramesh Raskar, my advisor, is that you go to the smallest booths that look the most pathetic, because that's where the future lives. And so I'd go around to those tiny black booths and I'd say, what do you guys got? And they'd say, this is a weird thing, check it out. And that's usually how a good idea starts. Or a bad idea.

We decided that something many of you know about, a phase spatial light modulator, could do what we want. It can create digital lenses, but we were really using, if not abusing, this device. It's meant to be used with coherent light, with field-sequential color. We did none of those things. We just packed it into a VR headset. And that's what Nathan Matsuda, who's joining the team this summer, did over a two-year period. I'm incredibly proud of his work. Also, his animation skills are unsurpassed. So you have this little box, and you blow it up as Nathan does. The goal here is to make it look sophisticated, but it is not. It's just a VR headset. It has a lens and a micro-OLED display. The focal length of the lens is a little longer than normal to give us some room to work with. And with that extra room, we add a phase spatial light modulator and a pair of polarizers. When all of those things act together, what you create is a digital compound lens. This is a way to get a no-moving-parts focusing system. So that's good; it's tackling the other side of the coin. When a wavefront leaves the display, first it gets a little bit of local focus adjustment from this digital freeform lens made by the phase SLM, and then most of the aberration control, most of the optical quality, comes from a high-quality refractive lens. Now, sadly, when you go to the somewhat abandoned booths on the edge of the expo floor, they don't always have exactly what you want. What I want is a high fill factor, transparent phase spatial light modulator. What I can find is a high fill factor, incredibly small pitch, great reflective phase spatial light modulator. So you gotta throw in a beam splitter for today.
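To make "bend the focus to follow the scene" slightly more concrete, here's a toy sketch of fitting a single smooth focal surface to a depth map expressed in diopters. The neighbor-averaging stands in for the phase SLM's limited curvature; the real system solves a proper constrained optimization, and every number and name here is illustrative.

```python
import numpy as np

def fit_focal_surface(depth_d, iters=200, stiffness=0.25):
    """Relax toward a surface that is smooth (SLM-realizable) yet close
    to the true scene depths, both expressed in diopters."""
    surface = depth_d.copy()
    for _ in range(iters):
        # Average each cell with its four neighbors to enforce smoothness...
        smooth = (np.roll(surface, 1, 0) + np.roll(surface, -1, 0) +
                  np.roll(surface, 1, 1) + np.roll(surface, -1, 1)) / 4.0
        # ...while pulling back toward the scene's actual depth map.
        surface = stiffness * smooth + (1.0 - stiffness) * depth_d
    return surface

scene = np.full((64, 64), 0.5)      # background at 0.5 D (2 m)
scene[20:40, 20:40] = 3.0           # a near object at 3 D (~33 cm)
surface = fit_focal_surface(scene)
print(f"worst residual defocus: {np.abs(surface - scene).max():.2f} D")
```

Whatever defocus remains after the surface is fit is exactly what the deconvolution pass and, if necessary, a second strobed surface are there to clean up.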
With the beam splitter in place, everything works the same. The wavefront leaves the display, passes through the beam splitter, gets modulated, comes back, gets attenuated by some polarizers, the unused light goes to a light dump, and you have a focal surface display. So now you button all that back up. Once again, you ask your great mechanical engineers, hey, can you create one of these in a week, because we have a paper deadline? They're like, no problem. They do some machining, they create two of them, put them on an IPD adjust, and you get a binocular focal surface display.

Now, the really important bit is what astronomers always call first light. How does it look? All this effort, all these simulations, how does it look? At first, it looks absolutely awful. It's not color balanced, there's flicker. It's just terrible, even with the best camera. But slowly and methodically, you get to a point where it starts to look promising. And so here's what the device looked like the last time we touched it for the paper. You can see a four-diopter scene like Paper Town. The car is in sharp focus. It's not digitally blurred; it's optically blurred. And without any eye tracking whatsoever, as you look into the mid-ground and the background, everything comes in and out of focus correctly.

So is this it? Did we really reach the finish line? Can we get away with just one focal surface? Well, as a computational imaging researcher, you really just have to trust the data, right? We're using algorithms; we don't know exactly what they're doing. They're a black box. So you use a database of lots of scenes and you ask, how did it work out? This is the one boring chart I have in my talk, but it's the most important one, because it frames everything you just saw. If we're trying to find a practical solution that exploits ideas of multiple focal planes, whether bent or flat, here's how it works out. Remember, diopters are inverse meters. To give you a sense: in the US, eyeglasses prescriptions are written in quarter-diopter increments. When you ask a brilliant vision scientist like Marina Zannoli on my team what the target should be, according to the literature, that red line is where we want the focus error to be to make a great replication of a scene. And what you'll see is that multifocal's real weakness is that you need too many planes to span a large diopter range. And of course, it takes things like that multifocal perceptual testbed to really know if that's the case. With adaptive multifocal, you can make things a little better, but you still need too many planes, and they need eye tracking. But with one little focal surface, and all the complexity it brings, you might be able to get rid of eye tracking. To make things great, though, I think you need more than one focal surface. And so we're back where we started. We're still finding that eye tracking is absolutely instrumental to solving focus with today's components.

So that brings us full circle, back to the giant robot. If you'd asked me four years ago what I would work on at Oculus, this wouldn't have been it. I actually thought vergence-accommodation was a subtle problem. And it wasn't until I saw an object so close to me, closer than you'd ever have in a 3D theater, that I realized the vergence-accommodation conflict would produce so much blur. It wasn't just uncomfortable; it was frustratingly blurred. And that's why we set off with so many people for so long to solve this problem.
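To put rough numbers on that chart before summing up: with N focal planes spaced evenly in diopters across a D-diopter scene, the worst-case content sits halfway between planes, for a defocus error of D/2N. A sketch, using the quarter-diopter figure from the talk as an assumed acceptability threshold:

```python
# How many fixed focal planes does a 4 D scene need?

def worst_case_error_d(span_d, n_planes):
    # Planes at the centers of n equal dioptric bins: max error = span/(2n).
    return span_d / (2.0 * n_planes)

SPAN = 4.0      # Paper Town: 25 cm (4 D) out to optical infinity (0 D)
TARGET = 0.25   # assumed threshold, echoing quarter-diopter prescriptions

n = 1
while worst_case_error_d(SPAN, n) > TARGET:
    n += 1
print(f"{n} fixed planes needed")   # 8 planes
# Adaptive multifocal shrinks the span the planes must cover (at the cost
# of eye tracking); a focal surface bends one "plane" through the scene
# (at the cost of an SLM). Either way, you trade plane count for another
# hard subsystem.
```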
And so it's surprising to me that this big immersive moment we strove for so much with televisions came for free in VR, but the thing that's so easy, just reading text, became this grand challenge for my team. So to sum things up, I think there's one takeaway message. Threaded through all of this was the problem of focus, but there's something deeper going on here. It's the realization that displays will no longer just be glowing rectangles on a wall. They are now, and those will probably still be with us for many decades, but I think the one true future is a mobile wearable display where you can have beyond-theatrical experiences anywhere. And if you're gonna do that, everything's gonna change. These beautiful projection displays have to work for all the thousand-plus people in this room. That's a hard problem. To have that viewing angle, that dynamic range, that contrast over such an enormous field of view, it's not just challenging, it's wasteful in terms of optical efficiency.

And so here's the magic trick I'll pass on. VR and AR allow you to cheat, not just by wearing glasses, but by only worrying about two eyes in the entire world. It's a personal device. And not just two eyes: each display only addresses one eye. That eye, with good work, can be tracked. Every photon, every frame, every single resource of that display can be aimed right down the barrel sight of one eye. And I think that insight, the idea of a reactive display that goes beyond a computational display, is what will pull the future in. If it would have taken decades to build that living room holodeck, hopefully it'll be years to see some of these concepts in a wearable display. That's the true difference a wearable display can make.

And so now, to come full circle, I return to someone who inspired me, Gabriel Lippmann. I wanna show you this quote one more time, but this time I'm gonna give you the full passage. What Gabriel was really getting at, if you read that last sentence, is that he believed in his dream. I don't know if he knew he would get a Nobel Prize, but he knew that, as an engineer, it was possible to create a perfect photograph. And we in the Society for Information Display, we believe it too. You see everyone showing the best in the world across the hallway, but we have to realize engineering is a finite endeavor. That glowing rectangle on the wall is getting incrementally better, so it's either gonna be tiled across our living rooms or we need a new challenge. And I hope some of you in the room will be inspired to tackle wearable displays. I think that gets at what Gabriel was really going for. It was never about the engineering in the first place. It was about turning these windows into doors, and stepping through them, so that we could create a platform, an infinite canvas, for unlimited stories.