Interviewer: So we're here at the eye zone. Hi, who are you?

Pranit: Hi, I'm Pranit. I'm a graduate student at UNC Chapel Hill, the University of North Carolina at Chapel Hill. This is a variable focus augmented reality display.

Interviewer: Variable focus?

Pranit: Yeah. The currently available augmented reality displays, like the Microsoft HoloLens, the Meta, or even the current Magic Leap, all show virtual imagery, but that imagery is snapped to a single fixed depth plane, and that causes a lot of discomfort when viewing virtual content.

Interviewer: What does this do when you open and then close it?

Pranit: This is a variable focus, single-optic AR display. You can see there's a deformable mirror membrane that is changing its shape, and the change in shape comes from air pressure.

Interviewer: This is air?

Pranit: Yeah. There's a flat surface, there's a membrane, and the membrane's shape changes based on the air pressure. Each curvature corresponds to a specific depth for virtual imagery in the real world.

Interviewer: So it's very important to have variable focus, because when you do AR you want to focus here or there? You want to see things at different places in the field?

Pranit: Right. There's something called the vergence-accommodation conflict, which arises from virtual imagery being snapped to a specific depth plane: you'd have to strain your eyes to fuse the images. That's an especially big problem in AR displays. Say you want to focus at three meters, but all the virtual imagery is snapped to one meter; then you cannot focus on the real and the virtual imagery at the same time. So it's important that virtual imagery appear at different, spatially registered depths, and that's why variable focus becomes very important.

Interviewer: Did you see the keynote? I've been hearing about... you said he's trying to do something for VR, right?

Pranit: Right.

Interviewer: What are you doing for AR?
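The curvature-to-depth mapping Pranit describes can be sketched to first order with the simple mirror equation, ignoring the display's actual folded optical path. This is an illustrative calculation, not the team's code; the display distance and depths are made-up values.

```python
# First-order sketch: what focal length (and radius of curvature) must the
# deformable membrane mirror take on to relay a display panel at a fixed
# distance out to a chosen virtual image depth? Uses the vergence equation
# V_out = V_in + P with the mirror relation R = 2f. Illustrative only.

def mirror_curvature_for_depth(image_depth_m: float, display_dist_m: float):
    """Return (focal_length_m, radius_of_curvature_m) for a concave
    membrane mirror that relays a display at display_dist_m to a
    virtual image at image_depth_m (both measured from the mirror)."""
    v_in = -1.0 / display_dist_m    # diverging light from the display (diopters)
    v_out = -1.0 / image_depth_m    # desired vergence of the virtual image
    power = v_out - v_in            # mirror power needed (diopters)
    focal_length = 1.0 / power
    return focal_length, 2.0 * focal_length  # for a mirror, R = 2f

# Example: display 5 cm from the membrane, image pushed out to 1 m vs. 3 m.
for depth in (1.0, 3.0):
    f, r = mirror_curvature_for_depth(depth, 0.05)
    print(f"depth {depth} m -> f = {f * 100:.2f} cm, R = {r * 100:.2f} cm")
```

Each target depth maps to one membrane curvature, which is why sweeping the air pressure sweeps the virtual image through depth.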
Pranit: I'd say VR is in some sense easy, because you're not looking at the real world; it completely blanks out the real world, so you're free to do anything in the fully rendered virtual world. But when it comes to AR, it becomes important that you also have an unimpaired view of the real world.

Interviewer: And what is the demo right here?

Pranit: It's the same demo you can see in the video: an automatic implementation of the static prototype. There's a set of electronics controlling the air pressure, and a rendering pipeline that renders according to the set curvature of the membranes.

Interviewer: And these membranes, the effect and the quality of the optics and everything, how does it compare? Is it good enough? Will everything be possible with this, or...?

Pranit: Oh yeah, it's currently at the prototype, or proof-of-concept, stage, but I'd say it's not very far from being a real product. We currently hit something like 15 cycles per degree theoretically, but this could go as high as 30 cycles per degree with the right light engines. I'd say it's not far from reality.

Interviewer: So right now, is this one active?

Pranit: This one's not active. Unfortunately it broke during shipping; we're working on getting it running again.

Interviewer: So will you need eye tracking or something like that to make it a final product?

Pranit: Oh yeah, this would definitely need eye tracking, because you'd want to set the virtual image depth based on where you're looking, and that should come from binocular eye tracking.

Interviewer: And what you're showing here with the air, is it possible for it to be fast enough to be usable?

Pranit: Oh, sure. We're currently working on a version that uses very fast linear actuators to quickly change the air pressure, something like 500 millimeters per millisecond kind of actuators.
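The binocular eye tracking step can be sketched as a small triangulation: the angle between the two gaze rays plus the interpupillary distance gives the fixation depth, which would then drive the membrane curvature. This is a hypothetical sketch, not the team's pipeline; the IPD and the clamp limits are illustrative assumptions.

```python
import math

# Hypothetical sketch: estimate the fixation depth from binocular eye
# tracking by triangulating the two gaze directions for a symmetric,
# straight-ahead gaze, then clamp to a plausible focal range.

def fixation_depth(ipd_m: float, vergence_angle_rad: float,
                   near_m: float = 0.25, far_m: float = 10.0) -> float:
    """Depth of the fixation point, assuming symmetric convergence.

    vergence_angle_rad is the total angle between the two gaze rays.
    """
    if vergence_angle_rad <= 0.0:
        return far_m  # eyes parallel or diverging: treat as far focus
    depth = (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)
    return min(max(depth, near_m), far_m)

# Example: 64 mm IPD, fixating a point 1 m straight ahead.
angle = 2.0 * math.atan(0.032 / 1.0)   # total vergence angle for a 1 m target
print(f"{fixation_depth(0.064, angle):.3f} m")
```

In a real display this estimate would be filtered over time before being fed to the pressure controller, since raw gaze data is noisy.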
Interviewer: So some kind of billion-dollar company comes over here and says, here's a billion dollars, can you make it work in, like, six months? Is it possible or impossible?

Pranit: Six months would probably be a very hard target, especially for shrinking down the form factor. But in a year it shouldn't be very far from reality.

Interviewer: In a year?

Pranit: In a year, I think.

Interviewer: Are you going to do it?

Pranit: Sure.

Interviewer: What's next for you?

Pranit: There are a couple of other things we're trying to target. Currently the membrane is of uniform thickness, so its shape is not locally controllable. We're working on a version where we can hit locally controlled target shapes of the membrane, plus a couple of other related problems.

Interviewer: All right. Make things smaller, or more compact?

Pranit: Make things more compact, in a good head-mounted form factor without actuators sticking out. The aim of this whole community is to shrink these AR displays to an eyeglasses form factor.

Interviewer: You want your glasses to have this stuff?

Pranit: Yes.

Interviewer: What is this PCB here?

Pranit: It's a display. The display is reflected off the membrane, and that is relayed into your eyes, so the virtual image you see is actually coming from this display. The effective resolution of the overall display basically depends on what kind of display panel we're using: the higher the display resolution, the better your image.

Interviewer: Who makes up your team?

Pranit: We have a big collaboration: we're collaborating with NVIDIA, with Saarland and MPI, and of course UNC Chapel Hill.

Interviewer: How many students?

Pranit: Currently we're a team of four students, three postdocs, and three professors.

Interviewer: Which one are you?

Pranit: Sorry? Oh, I'm a student, a graduate student.

Interviewer: So how can we study this?

Pranit: It's part of a PhD dissertation. David Dunn is the PhD student whose dissertation this primarily is.

Interviewer: All right, so will he get the PhD with this?

Pranit: Oh yeah, he's going to graduate probably next year.

Interviewer: Cool. And you?
Pranit: I just finished my second year, and I'm going to graduate in maybe two or three years.
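The resolution figures discussed above (15 vs. 30 cycles per degree, and why the panel matters) follow from a simple Nyquist-style back-of-envelope calculation. The pixel counts and field of view below are illustrative assumptions, not the prototype's actual specs.

```python
# Back-of-envelope: angular resolution in cycles per degree from the panel's
# pixel count and the field of view it is spread across. Nyquist: one cycle
# needs at least two pixels. Numbers are illustrative, not measured specs.

def cycles_per_degree(pixels_across: int, fov_deg: float) -> float:
    return pixels_across / (2.0 * fov_deg)

# A 1080-pixel-wide panel over a 36-degree field of view lands at 15 cpd;
# doubling the pixel density over the same field reaches 30 cpd.
print(cycles_per_degree(1080, 36.0))   # -> 15.0
print(cycles_per_degree(2160, 36.0))   # -> 30.0
```

This is why swapping in a higher-resolution light engine directly raises the display's achievable cycles per degree, as Pranit notes.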