Good morning everyone, and welcome to the second day of the second annual OpenSimulator Community Conference, 2014. We're really delighted to see everyone here this morning and very excited about a terrific program for today's schedule. For those of you who are joining us for the first time today, we hope you enjoy the in-world conference facilities and have found your way to the keynote regions. Just as a reminder, be sure to check your groups: you should be in an OSCC zone group, and you'll need to teleport to the corresponding keynote region of your zone group assignment. So we hope to see you in the keynotes, and we hope that you enjoy today's program. Yesterday, on the first day of the conference, we had some really great sessions: a great opening panel in the morning, Philip Rosedale spoke about High Fidelity and the metaverse in the afternoon, and we had a tremendous program of sessions in all of the breakout tracks. If you missed the sessions yesterday, don't worry, we do have stream archives up and available on the Ustream channels. And if a recording is missing, we'll be doing some post-production work to get those sessions up as soon as we can, so you'll be able to view those streams soon on the web. Today we also have a really exciting program coming up here shortly. I'll be introducing Steve LaValle from Oculus VR, who will be talking about their efforts to bring VR to a mainstream audience, and that will be very exciting. But I wanted to talk a little bit today about something else. Yesterday I talked about where we have come from in the year since the first OpenSim conference. OpenSimulator itself, the platform, has come a long way since our last conference, and the technology surrounding virtual reality and the metaverse is progressing so quickly; it's such an exciting time. But today I thought I would talk a little bit about what it means to be a community. 
You know, we call this the OpenSimulator Community Conference because it really is a grassroots event. And I just wanted to mention, for those who are here for the first time today, some of the developments in the last year: it's not only the technology developments, but I think it's also the way that the community is working together in ways that perhaps it didn't in the past. When we put out a call this year for volunteers, when we asked for moderators and greeters, we had over 80 people reply. And when we asked for help to cover the costs of this event, almost a third of everyone who registered chipped in as a crowdfunder. So I just wanted to say thank you to all of our crowdfunders and all of our volunteers and all of our staff, who really are putting on a great event. And I think today is going to be, hopefully, a little smoother; yesterday we had some glitches with streaming, and hopefully today things will go a little bit better on that end of things. But, you know, this event really wouldn't be possible without all of the people in the OpenSimulator community who are working so hard to make the platform great. One of the things that people often ask is: how do I get more involved? Where do I find out more information if I'm looking to learn more about OpenSimulator? And I thought I would make sure to share that the main website for the OpenSim software is opensimulator.org. If you haven't looked at the OpenSimulator website, please do; there's tons of great information there. And the other thing that you can do, if you're interested in being more involved with the community of developers and users, is to join the mailing lists. I'm going to paste the links into local chat for those of you who are here in world, and I'll also put them in the stream for those of you watching with the web stream. If you haven't joined the OpenSim users or developers mailing lists, I would really encourage you to do that. 
We're all so busy; everybody's got jobs and everybody's doing different things out in the metaverse, and it's easy to lose track, but the mailing lists are a really great way to keep in touch with the developers and to learn from all the other people using OpenSimulator. It's a great place to ask questions and to get help when you need it. And if you're thinking about running OpenSimulator or running your own grid for the first time, it's amazing how many brilliant people there are who can help you troubleshoot any problems that you're having. So if you haven't run an OpenSim grid, I would certainly encourage you to try, and don't be afraid. We'll hope to see more folks out there joining the mailing lists so we can keep in touch in between conferences. For those who are attending for the first time today, I also wanted to run through just a few quick housekeeping notes so you're comfortable with how things will work, and hopefully keep things running smoothly today. We do encourage everyone to make sure that you have auto-play checked for your streaming, parcel, and prim media. If you haven't done that already, be sure to go into your viewer preferences and turn on auto-play under the audio and video tab for both parcel and prim media. Some of our speakers are going to be using media-on-a-prim, and it'll work much better for you if you have auto-play turned on. We also ask that when you teleport into a presentation space, it would be really helpful if you could sit as soon as you get in. In any of the breakout spaces, or here in the keynotes, as soon as you teleport into the region and things rez for you, make sure to sit; that will really help the performance of the simulators. You may have seen in your email today, hopefully everyone got the message, that we did convert all of the streaming tickets to in-world tickets. 
The grid performed so well yesterday, we didn't have any real problems with performance, so we've opened the floodgates, and everyone who wanted to come in world can today. We're expecting hopefully to see slightly bigger crowds, so it makes it even more important that everybody sit in those presentation spaces, so that the simulators can handle the load. Of course, in all the other regions, you can feel free to walk around and explore. Also, if you haven't yet, we have a little game going in the grid here called the Open Meta Quest, and it's testing your knowledge about the metaverse. The game has some really fantastic prizes, so if you haven't had a chance to play it today, I hope you'll take the time. You have to search for the clues, and they're in various regions around the grid, particularly the expo regions, where our sponsors have really great booths and fun things to do. So if you haven't had a chance to do that, please do. And then finally, if you're having any technical issues, our staff are around in the regions all day today; you should notice they're wearing blue OSCC staff shirts. Feel free to ask them for help. If you have questions, if you're having trouble hearing or seeing, or if you're not sure where to go, they'll be able to assist you with anything that you need. And if you're having account issues, if you're watching this on the web and you haven't been able to get in world, please do check the FAQ page on our website first, and if that doesn't solve your problem, then you can feel free to email us for help at open sim. Okay, we'll be streaming live throughout the day on all of the various track channels. You can see the full conference program on our website at conference.opensimulator.org. The streaming link will appear at the top of every session description, so you can always check out what's happening with all the different tracks on the web. 
And you can tweet with the hashtag #OSCC or #OSCC14, and we'll be watching for your questions and comments on Twitter. So with that, I think we're probably ready to introduce our keynote speaker today. I'm sure that everyone who's watching and in the audience was as excited as I was by the announcement of the Oculus Rift. The first time that I wore the DevKit 1... it had been some time since I had a real technological wow moment, where you have that feeling that, oh gosh, this is a game changer. And the first time that I tried the Rift, I had that: I got goosebumps, and I'm getting goosebumps again thinking about it. It's the realization that the way that we've been interacting with this technology can be so much better, that it really is a game changer. And then the difference between the DevKit 1 and the DevKit 2, if you've had the opportunity to try it, again, it's just shocking how much presence you feel being in the virtual world. I think that the excitement around the Rift has really reignited the discussions about virtual reality and the metaverse and all of the technology that we're working so hard to bring to users. And I'm absolutely delighted to have Steve LaValle here today from Oculus VR to talk about what they're doing and what their plans for the Rift will be. Steve started working with Oculus just a few days after its successful Kickstarter campaign, and he's been leading their R&D efforts, right up through their acquisition by Facebook earlier this year. He developed perceptually tuned head tracking methods based on IMUs and computer vision, and he's also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences, which, as we know, is so important. He's also a professor of computer science at the University of Illinois, and he has worked in robotics for over 20 years. 
He's perhaps best known for his introduction of the Rapidly Exploring Random Trees (RRT) algorithm for motion planning, and he has a book called Planning Algorithms. So thank you, Steve, and welcome. Thank you very much. Thanks for the introduction and for the opportunity to speak here today. I really appreciate this. I became familiar with OpenSimulator a couple of years ago because, while working at Oculus, I lived on one side of the UC Irvine campus and the Oculus headquarters was on the other side, just behind the campus. I rode my bike across campus to work every day, and in the process stopped in a lot and got to know a lot of the great faculty at UC Irvine, like Walt Scacchi and Crista Lopes. We talked a lot about OpenSimulator, and I was very excited about the project and the effort, so I'm really glad to speak here today. The title of my talk is Virtual Reality: How Real Should It Be? And I'll give you some ideas about that as we go along here. Let me change to the next slide. So Fleep already gave some idea about my background, but I thought I would talk a little bit about that. I've spent over two decades as a robotics researcher, and in the process got familiar with a lot of mathematical problems, especially involving sensing and filtering and motions through three-dimensional space. Most of my background has been in the area of motion planning; the images up there are taken from the work of Marcelo Kallmann, who's a professor at the University of California, Merced. Suppose you want to tell a humanoid robot to reach into a refrigerator and grab something: you would like to give it a very high-level command and then have the computer, the algorithm, figure out all the intermediate motions to accomplish that, so that no accidental collisions occur and, hopefully, the motions look reasonably realistic. 
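Since the RRT algorithm just came up, here's a minimal sketch of the idea in Python: grow a tree from the start by repeatedly sampling a point, finding the nearest tree vertex, and stepping a bounded distance toward the sample. This is an illustrative toy in an empty 2D workspace, not code from any robot system; the workspace bounds, step size, and goal bias are arbitrary choices, and a real planner would also check each new segment for collisions.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=500, seed=1):
    """Grow a rapidly exploring random tree from start toward goal
    in an empty [0,10] x [0,10] workspace (no obstacles)."""
    random.seed(seed)
    nodes = [start]                      # tree vertices
    parent = {0: None}                   # child index -> parent index
    for _ in range(iters):
        # Sample a random point, with a small bias toward the goal.
        if random.random() < 0.05:
            target = goal
        else:
            target = (random.uniform(0, 10), random.uniform(0, 10))
        # Find the nearest existing vertex to the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), target)
        if d == 0.0:
            continue
        # Step a bounded distance toward the sample; a real planner
        # would also check this segment against obstacles here.
        t = min(step / d, 1.0)
        new = (nx + t * (target[0] - nx), ny + t * (target[1] - ny))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:  # close enough to stop
            break
    return nodes, parent

nodes, parent = rrt((1.0, 1.0), (9.0, 9.0))
```

Because samples are drawn uniformly, the tree rapidly spreads into unexplored space, which is the property that makes RRTs effective in high-dimensional configuration spaces as well.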
Of course, to develop these algorithms and to develop software for them, we get very comfortable with virtual environments and simulations in those environments, and some basic physics and kinematics and these kinds of things, originally for the purpose of providing tools for robots. But then these algorithms ended up being used all over the place, well outside of that: some by people in the game community, and in many other areas, such as virtual prototyping and so forth. So I find it very interesting that I was able to get close to the game world, video games and the software engineering aspects of that, by having a robotics background that ports over to it, let's say. I've also been a professor at the University of Illinois, and I worked at Stanford and Iowa State before, so I mainly have an academic background, and I'm mainly going to give some research-oriented ideas and thoughts. As a researcher, I tend to find more questions than I answer; I think many times figuring out the right question to ask is the hard part, trying to identify the problem. So that's the kind of thing I'll be focusing on today, rather than talking about super technical details of what we did before; I want to talk more about what some of the interesting challenges or problems are. And finally, I'd like to say I certainly love open source, so it's very exciting to see OpenSimulator. We've written some small open source libraries for doing motion planning and for sampling over transformation groups that are related to those kinds of problems. Also, my textbook Planning Algorithms has been online for free. I didn't make it open in the sense that anybody could go and edit it, like Wikipedia, but it has been open to the community as a free textbook. It was also published by Cambridge University Press, but it was very important for me to make it open for anybody to download and access, since about 2004. 
And that was actually how I got into Oculus: people were searching for some of the basic concepts, some of the basic kinematics, quaternions, some of the basic mathematics that were needed for head tracking, and they came across parts of my book. I had no prior background in virtual reality, or in the game industry, or in industry in general, but I got in that way a couple of years ago. So some of that came about because of my efforts to make things open and available to the community, and I think that's an excellent model. Let me advance to the next slide, and talk a little bit about my work at Oculus. This is actually related to open source, in the sense that for my first six months working at Oculus, which started a few days after the Kickstarter, I started off working as a consultant, and I was in northern Finland at the time, working in the city of Oulu. I was attracted to Finland for a number of reasons. I was on a sabbatical working on a book project, so I needed to have some distance from my daily job in order to focus on the book; I was writing on sensing and filtering, which ended up being related to head tracking for Oculus as well. But I also chose Finland because it's the home of Linux, and I've always been curious about the culture of the Nordic countries in Europe that contributed so much to open source and pioneered so much of that. So I thought it was great to be immersed in that culture for a while. But somewhere in the middle I ended up pulling out of there unexpectedly early and moved straight to Irvine, California to work with the Oculus team. So it was in September of 2012 when I started with Oculus, doing some of the basic math coding and kinematics, working on the basics of head tracking. My wife, Anna Yershova, is in the picture there; we ended up working together. 
We also had a robot there in the apartment in Finland that had the DK1 sensor board on it, and we were doing calibration studies and sensor tracking performance studies. So that's how we got into things. When I went to Oculus and joined full time, taking leave from the university in March, I ended up being the head scientist and worked on all sorts of things, mainly around head tracking. But then eventually, as I'll give some indications of in my talk, I got very excited about human factors, perceptual psychology, and human vision. It became very clear right away that we have to understand how the human body works at some of the lowest levels in order to really make VR effective. And this became clear even in something I would have thought of as distant or separated from human factors, like simple head tracking; it ended up being very important even for that, and I'll talk about that in a bit. So that's how things came along, and then in the summer I returned to the University of Illinois. I live back in Illinois; that's where I'm talking to you from today, from central Illinois. I'm half time a faculty member there, having resumed that role, and half time working for Oculus VR. Let's see, go to the next slide. So what was my motivation for jumping in? I didn't have prior virtual reality experience, but I did have prior experience with Second Life over a decade ago. I was experimenting a lot with that in my own free time, for fun, and really enjoyed it. I really enjoyed the opportunity to interact with people and to try different avatars, and I always wondered what the next generation of that would be like, even if I wasn't working directly in the area. Virtual reality research was very expensive: the equipment was expensive, it took a lot to build a big lab to do it, and it was very hard to make comfortable experiences, certainly 10 years ago or more. I had an additional personal interest. 
One perspective that motivated me: as in Finland, on my previous sabbatical in 2004 I was living in Europe, in Poland; that's where I wrote my book on planning algorithms. My connection to Poland is that my grandmother and her sister were separated after World War II. My mom came to the United States when she was about five years old with my grandmother, but my grandmother was not able to find her sister. They were both Polish, but they were displaced in Germany, working on farms; they were agricultural workers from small villages. They only got to see each other twice in the 60 years after that. And it was really an unfortunate situation in some ways, because my grandmother and her sister had an aunt in St. Louis, Missouri, in the U.S., who really wanted to bring both of them over, but they were separated. I ended up bringing my grandmother there, for that picture there, in 1993. She was 73 at the time, and I never really could get my grandma to go back there again. So they talked by phone; before that they wrote letters. A couple of times I got them to have a video Skype conversation, and I thought that was really nice. But I thought, wow, it would be really great for people who are distributed around the world, especially if they have limited mobility; as I watched my grandma and her sister age even more, the trip became so uncomfortable. Imagine being able to attend things like birthdays and weddings of family members, grandchildren perhaps, on the other side of the world; and more generally, not just for elderly people who are separated far from family, but for people with limited mobility in general. I found that a really inspiring application of virtual reality. It was exciting enough for me to jump into this: in addition to all the other industries and applications and such, I just found this a really personal, interesting concept. So let's see, I'm going to go to the next slide. 
I've been wondering, as I've gotten in: what is virtual reality? I've noticed in recent talks that Philip Rosedale, who of course spoke yesterday, has given an interesting definition that I like, and he gave this one at the SVVR conference a few months ago: virtual reality is a sensory experience in which the results of our actions are consistent with past experiences. As he was commenting, there are a lot of not-so-satisfying definitions that are very much pinned to the current technology, rather than, let's say, the effect that it has on the human body, or how the human interacts with these artificial worlds that we construct. The Merriam-Webster definition that I looked up as I was poking around also seemed fairly satisfying: an artificial environment which is experienced through sensory stimuli, such as sights and sounds, provided by a computer, and in which one's actions partially determine what happens in the environment. So I went through some iterations yesterday, tried to come up with some definitions and concepts, tried to refine a definition, and one of the things I came up with is that I wanted to make it a little more general than humans, because I became aware recently of a body of literature on virtual reality being applied to insects, animals, all sorts of other organisms. The reason why I find that fascinating is because I want to emphasize the fact that we are an organism, a living organism, that's caught up in this technology while we are using it. It's sometimes easy to forget that while we're immersed in it, but it ends up being very important to remember it and to pay attention to it, to focus on it, as we develop these kinds of systems. 
So, one definition, and I reserve the right to refine my definitions in the future, but one definition I found reasonably satisfying today, is that it involves inducing targeted behavior in an organism, a living organism, by using artificial sensory stimulation, while the organism has little or no awareness of the interference. By that last clause I'm alluding to the feeling of immersion and, higher than that, the feeling of presence, without necessarily any direct reference to consciousness or other things like that. You're behaving in some kind of targeted way; the technology that's being used is causing you to behave in some targeted way, but you're not really very aware of that, or concerned with that, while you're experiencing it. The reason why I use the word organism: I was just thinking about how we are animals of some kind, so we're presented with some sort of alternative stimuli, and that's causing us to behave in some way that's familiar to us. In the upper left there, you see what I often think about: gerbils on a circular wheel, running along. It's known that rodents somehow comfortably take off running and get a lot of exercise even though they're in a cage; they seem to go into this mode where they're perfectly happy to run almost forever, to the point of exhaustion, inside a wheel. And humans, in our efforts to try to get more exercise: it was the case that we got plenty of exercise in the wild, and then after enough technology improvements, so-called improvements, we end up sitting around not doing very much. So people invented treadmills to get the same kind of feeling; you feel a little bit immersed, I suppose, in the fact that the ground is moving and you feel like you're running along. Of course, it gets boring pretty fast unless you have some visual stimuli, and obviously you can connect VR to treadmills. That leads to the bottom part of the slide, which is that people have applied virtual reality to insects. There's an example there of a cockroach running on a spherical foam ball; it goes running along on essentially an omnidirectional treadmill, but the cockroach is presented with visual stimuli. This was actually done by a physicist at the University of Oulu in northern Finland, coincidentally the same place where I was when I started consulting for Oculus, though I didn't even know about this work at the time. They study the neural activity of the insect while it goes running along; it becomes seemingly unaware that its environment has been manipulated somehow. And then when I see something like the Omni treadmill virtual reality peripheral in the lower right there, you see humans wanting to do the same kinds of things, presented with artificial visual stimuli. So again, I want to emphasize the organism part of it, or in our particular case the human side of it: to engineer these systems effectively, it's very important to consider the interaction between the human and this technology, the stimulation that's being presented. All right. So how real should virtual reality be? 
I made that the title of my talk. It depends on what you want to do, I guess, is the easy way out of the question. So one thing, of course, is to target the task. I take that kind of approach because the same thing has happened to me in robotics: if you want to make a general-purpose robot and you're not quite sure what you want it to do, it tends to drive the performance specs to a very high level. We might imagine, oh, I would just like to make a humanoid robot that has all the capabilities of humans, and then it can handle all the different kinds of tasks that I can imagine; perhaps I can go beyond what humans can do and design the robot with more and more exotic capabilities. But very often we have a very targeted application. Maybe the goal is just to vacuum the floors, and then you end up with a robot that doesn't look like a humanoid pushing a vacuum cleaner; you end up with something very specific for that. And at the higher levels, in the way that VR gets used, I think we should think like that as well: think about the task, the targeted kind of application that you're designing for. Maybe you want to take a university course; as an educator I think about that, so what aspects are going to be most effective for learning in that situation? Maybe you want to maintain a long-distance relationship; that would probably involve a different amount, a different level, of interaction than taking a university course. Maybe you want to play a first-person shooter game; maybe it's networked, maybe you're playing it alone. Maybe you want virtual travel, like the example I gave with my grandma and her sister; perhaps you want the participants to feel like they really are in some familiar place. Or if it's really virtual travel, maybe they want to go to some place and become familiar with it, a place that they haven't seen before but they've heard so much about, so you might want 
to take a trip to another country, maybe hang out on the beach, and so maybe levels of realism in that case become much more important. Perhaps you want to watch a live theater performance, or a recorded performance, which could be a movie-like performance; perhaps you want to sit in a virtual movie theater. So what kinds of things are needed for that? And finally, maybe you want to sit and write code in the virtual environment. What should it look like to be able to do that effectively? Can you do that? Can the technology get to a point where you'd rather write code inside of VR than sitting in front of a couple of giant monitors? So I think it's very interesting, and for each one of these kinds of tasks, and many more that are coming up, the amount of so-called realism really depends on what you're trying to do, what you're trying to accomplish with the task. However, in spite of all of that, I do want to emphasize, and this is something I'll get into in a bit, that it's important to maintain a certain baseline: we have to maintain a certain level of comfort and safety, and in addition, belief, in the sense that you're immersed, you feel present there. The first two parts, comfort and safety, are something that we took very seriously, and of course that's the part that has been most difficult for VR in terms of getting it adopted in a widespread way: making sure that it's a comfortable experience for everyone, especially for long-term use. So I want to give some insights into where we came from with that, the kinds of things that we've been thinking about, and what I think the community should be thinking about in general. As I said, in my own work I started off working on head tracking for the first development kit. So what are we doing for that? 
Well, we have an inertial measurement unit that is basically measuring linear accelerations from an accelerometer and angular velocities from a gyroscope, which is a MEMS device, the same type of technology that you have in your smartphones, but perhaps a little more advanced in terms of specifications. It's all leveraging that kind of technology, which we're very grateful to have, in addition to the display technology that came from the smartphone industry; that enables you to get fairly close to a comfortable experience right out of the box, which is what was able to happen with DK1 once you solved the software challenges. So I ended up applying my robotics background, which involved a lot of searching and sampling on the space of 3D transformations: not just applying transformations, but really understanding the space of all transformations, and in that case developing efficient algorithms that avoid numerical problems, singularities, and kinematic problems that are well known in the aerospace industry, for example, or if you make flying robots, well known by motion planning people. So it ends up being a kind of tracking problem on the space of 3D rotations for DK1, and on the space of rotations and translations for DK2, which I'll mention briefly in just a bit. A lot of the code we ended up writing ends up just being careful work over quaternion space, finding the right representations to make things simple, efficient, and clean, and hopefully hiding as much of the complexity as possible, so that developers don't need to deal with it on a daily basis; their job is easier when that stuff gets hidden. 
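To make the quaternion bookkeeping concrete, here is a minimal sketch of dead-reckoning orientation tracking from gyro readings alone. This is illustrative, not the Oculus SDK's actual code: each angular-velocity sample becomes an incremental rotation quaternion that is composed with the current orientation, and renormalizing keeps the estimate on the unit sphere. As the talk goes on to explain, integrating the gyro alone like this accumulates drift, which is why accelerometer and magnetometer corrections are layered on top.

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def integrate_gyro(q, omega, dt):
    """Advance orientation q by one gyro sample.

    omega is angular velocity (rad/s) in the sensor frame, dt is the
    timestep (s). The sample becomes an incremental rotation quaternion
    composed with q; renormalizing keeps q on the unit sphere instead
    of letting floating-point error push it off."""
    wx, wy, wz = omega
    speed = math.sqrt(wx * wx + wy * wy + wz * wz)
    angle = speed * dt
    if angle < 1e-12:
        return q
    s = math.sin(angle / 2.0) / speed    # scales omega into the axis part
    dq = (math.cos(angle / 2.0), wx * s, wy * s, wz * s)
    qw, qx, qy, qz = quat_mul(q, dq)
    n = math.sqrt(qw * qw + qx * qx + qy * qy + qz * qz)
    return (qw / n, qx / n, qy / n, qz / n)

# 1 kHz gyro stream: spin about the vertical axis at pi rad/s for 1 s.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = integrate_gyro(q, (0.0, 0.0, math.pi), 0.001)
# q is now approximately a 180-degree rotation about z: (0, 0, 0, 1).
```

Working with unit quaternions rather than Euler angles is one way to avoid the singularity and numerical problems mentioned above.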
As we start getting into the human perception side, I should say that one of the challenges we had is that if you just take gyro data and start numerically integrating it, you end up with a problem called drift, and before too long the world becomes tilted. We can use accelerometer data to try to correct the tilt, but if you correct the tilt too quickly, people might start to feel seasick, because they perceive the corrections being applied: all of a sudden you feel the world tilting. So you want to correct it very slowly. But if you correct it too slowly, then it's not keeping up, and the world will eventually end up tilted anyway after a longer period of time. So we had to use a robot to study the maximum rate at which that could happen. It turns out the faster you turn your head, the faster the errors tend to accumulate, from the way the MEMS sensors work, so the drift rates would grow faster. After doing a lot of experiments over several different sensors in a collection, because there's variation across sensors, there are temperature dependencies, all kinds of crazy things come in, we had to make sure that over the entire collection of misbehaving sensors, let's say, the corrections you apply are above that threshold, so that they're guaranteed to clamp down and compensate for this tilt drift, but not so high that you end up perceiving them greatly. Luckily, we were able to get into a sweet spot where we can apply the corrections and you don't notice them happening so much; they're kind of below the perceptual thresholds. The same thing happened with yaw correction. We had funny problems where, as people were playing cockpit games, the cockpit gradually started sliding over to the side, because even though we corrected for tilt, you had problems determining which way you're facing. So we ended up having to work initially with an uncalibrated magnetometer and 
figure out ways to get it automatically calibrated and working while developers were not even aware of that happening. Magnetometers are very difficult to work with; they're not exactly like a compass in the pure, ideal sense. I wrote some blog posts on that if you find this kind of thing interesting; I'm not here to talk about the details of that too much, but I found it really challenging and fascinating to eventually solve the problem, so that you're more or less facing in the same direction, rather than wrapping around in your chair gradually over time and getting tangled up in the cables while playing a cockpit game. Getting into the human side of it even more, we had to deal with latency. Anyone who's worked in virtual reality knows that latency has been the most serious and most cited problem for a very long time. The problem is that when you turn your head, the images that you see should appear to be in the right places. If I'm sitting here right now, as you can see, the Oculus appears to be on my forehead, so I'm not actually in virtual reality; I'm actually looking at a screen. As I turn my head and look at different people in the audience on the screen here, I don't perceive a problem with latency; I can look at your various avatars and it's fine. But in virtual reality, you turn your head and the images have to keep up; you have to present your eyes, your body, with what it expects to see, and there's some delay in that. You measure using your sensors and send that over USB; there's a little engine picture there, I took this slide from other people inside of Oculus, and the engine represents the game engine, so it's calculating what the scene should look like. Then it gets scanned out, and the pixels all have to switch, which with Development Kit 1 took up to 20 milliseconds, depending on the color and brightness differences; you ended up noticing that as a kind of blurring. But eventually you get out this result, which could take 30 
to 50 milliseconds, or longer if your game engine is taking a lot of time. And so that ended up being a difficult problem. We decided to attack that problem from two different ends. One of them is to use image warping, which is an idea that's been around for a while; you may have heard of image shifting or image warping. After the game engine step, when you have updated the world, you grab new sensor readings and perform some quick post-processing, let's say, on the images that you've generated, to try to fix them as well as you can. It doesn't solve all problems; there are fundamental visibility issues. You can look up visibility algorithms if you're interested in those kinds of problems; 3D visibility and 2D visibility algorithms raise interesting issues. You can't see features that were invisible when you did the calculations before. But it does help significantly. The part I worked on mainly was predictive tracking, and there's a lot I could say about that; it was an interesting technology. Predictive tracking has been around for a long time, but researchers who worked in virtual reality in the '90s mostly felt that it wasn't very helpful, because you had to predict too far ahead into the future, and they didn't really have enough sensor data to predict the trend. In modern times it ends up being a lot easier, because the sensor data that we have is very dense: it's no problem to get gyro readings at 1,000 hertz, and it's quite accurate to estimate the angular acceleration directly and then try to extrapolate what that's going to look like, oh, 20 to 30 milliseconds into the future; it's not so hard. The one thing I like to say a lot is that there's no free will at 20 milliseconds: the natural momentum of your head is going to pretty much enable us to predict what you're going to do. When it gets up to 100 milliseconds or more, then it gets much harder to predict what you're going to do. So that ends up being kind of a counterintuitive surprise.
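The velocity-only predictor just described can be sketched in a few lines. This is a hedged illustration under the constant-angular-velocity assumption; the names and the fixed 20 ms horizon are assumptions for the example, not the shipped tracker:

```python
import numpy as np

def predict_orientation(q, omega, horizon=0.02):
    """Extrapolate the head-orientation quaternion q (w, x, y, z) forward
    by `horizon` seconds, assuming the gyro's angular velocity omega
    (rad/s, body frame) stays constant -- the "no free will at 20
    milliseconds" idea. Toy sketch, not Oculus's actual code."""
    theta = np.linalg.norm(omega) * horizon   # predicted rotation angle
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))
    # Hamilton product q * dq applies the predicted incremental rotation.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```

The renderer then draws the scene for the predicted orientation rather than the measured one, so that by the time the pixels light up, the image matches where the head actually is.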
For people who have worked in the field longer, it's especially counterintuitive. So one of the things I discovered that really made me interested in perceptual psychology came as we started making predictive tracking methods; this just shows you a simple plot. I guess I can't illustrate this too easily; I didn't make an animation script for this. But if I just shake my head back and forth in a "no" gesture, turning side to side, and then look at the angular error that we got from the tracking method that we had: if we did no prediction at all, then in the average case we had about a degree and a half of error, at some reasonably sane speed of turning my head back and forth, and the worst-case error could be up to about 5 degrees or so. So I was really happy with this method that just extrapolates the angular velocity to predict the trend; it only took a couple of hours to hack up on a weekend while I was consulting, way back a couple of years ago. But then I got a little fancier, got a little overconfident, and tried to make a method that used acceleration in addition. I tried to really estimate the trend in terms of both velocity and acceleration, and the average error and worst-case error were both less when I did my experiment. So, thinking of myself as a good scientist and a reasonable engineer, I thought, well, this is going to be better, everyone's going to love it. And it was an interesting surprise: when they tried it, the supposedly better method was actually worse. This new method that had better angular error in the worst case and better angular error in the average case ended up being a worse experience, and I thought, ah, that can't be. But I really trust one of my colleagues at Oculus, Peter Giokaris; he was the lead for Unity, for bringing Oculus to Unity, so that was the main thing he worked on. He was also very sensitive in his outside-of-work life; he's an audiophile and DJ, very sensitive to distortions and problems in audio systems, and I think he kind of mapped those skills over to video problems. And so he said, take a look at this
new tracking method you made: when you just stare at one place in virtual reality, my new method had a little more jitter in it; but he said, when you turn your head, I agree that it looks perfect, it looks much better. So the angular error was in fact better when you turn your head, but the jitter was a little bit worse, too. It turns out that as you try to be more aggressive with prediction, in other words, as you try to see further into the future, it tends to introduce more jitter; in some sense, prediction is the opposite of smoothing, which causes latency. If you try to smooth data out, just like this kind of moving average I show in the plot there, then the red line is the trend that comes from making a moving average, and smoothing, no matter what you do, tends to introduce latency. Prediction is like the opposite of that: as you try to reach into the future and predict what's going to happen, it tends to amplify or increase jitter. So, given that that's the case, it caused me to think about it and make perceptually tuned filtering. It has two kinds of extreme modes. When you're not turning your head very quickly, say you're mainly remaining stationary, that's when you perceive jitter, so human perception is coming in, but not latency. And when you do a quick motion, you perceive latency but not jitter. So the idea was to do perceptually tuned filtering, which is not like a standard kind of filtering or tracking method that you find in the control and dynamical systems literature, or the standard methods applied to robotics, because there's a human in the loop here with very low tolerance for latency, so it becomes very important to get these things right. We predict further when moving quickly, but we do little or no prediction when stationary, and instead do more smoothing. And so that's how we ended up introducing this kind of tracking method. With the orientation-only work done, then of course, eventually, again considering human factors, you need to introduce position tracking as well.
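To make the perceptually tuned idea concrete, here is one way the two regimes might be encoded. The constants, names, and the exponential ramp are illustrative assumptions, not Oculus's implementation; the design point they capture is that the filter's behavior is chosen by what a human perceives in each regime, not by minimizing numerical error:

```python
import numpy as np

def prediction_horizon(omega, max_horizon=0.03, speed_scale=2.0):
    """Perceptually tuned prediction horizon (toy constants): when the
    head is nearly still, jitter is what you notice, so predict little;
    when it turns fast, latency is what you notice, so predict further
    ahead, ramping smoothly up toward max_horizon seconds."""
    speed = np.linalg.norm(omega)                      # rad/s
    return max_horizon * (1.0 - np.exp(-speed / speed_scale))

def smoothing_weight(omega, speed_scale=2.0):
    """Complementary knob: heavy smoothing when stationary (suppresses
    jitter; the latency it adds is imperceptible anyway), light
    smoothing when turning fast (avoids latency you would notice)."""
    return np.exp(-np.linalg.norm(omega) / speed_scale)
```

A tracker built this way extrapolates by `prediction_horizon(omega)` each frame and blends in smoothing by `smoothing_weight(omega)`, so the jitter-vs-latency trade-off shifts with head speed exactly as described.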
Because you could move your whole body side to side, let's say, without rotating your head, and then it becomes disturbing to not see the changes that your brain expects in the world. So we eventually added positional tracking with the latest development kit. We also worked on some of the other kinds of problems, such as making the pixels switch faster. Switching to OLED technology helped with that: we can switch the pixels very fast, down to, I think, about 80 microseconds or so, rather than problems of up to 20 milliseconds, so it's quite fast switching. If you leave the pixels on for the entire frame, then you get a problem called judder: you perceive more of a sequence of images rather than a continuous, fluid rotation. Because of that, we flicker the display: you turn it on, for typical numbers, a couple of milliseconds, and then it's off for the rest of the time, pitch black. Then you have a problem of perceiving flicker, which might remind you of CRT monitors, if you're old enough to remember using those. There were ergonomic standards and recommendations that the refresh rate needed to be well above 60 hertz; it needed to be above 70 or 75, and typically 85 was a high number. So we're striving to achieve those same levels of comfort. We have 75 hertz for DK2, but we're going to continue to be motivated to push the refresh rate even higher, to make sure it's very comfortable. Also, we are increasing the display resolutions as much as possible: DK2 is 1080p, and the Crescent Bay prototype that was shown at our recent developer conference is even higher than that. Things are going to continue to go higher, up until we reach comfortable, let's say, limits of human perception, because again, the human is really important here. So I now want to switch to a part of the talk where I want to give some different ideas of what happens; again, this is human-centered, or human-oriented, but some of the kinds of
problems we have, where I thought this would be appropriate for the people who have experience with OpenSimulator and with Second Life, in that we have problems adapting our expertise. For those of you who have extensive experience dealing with this platform, that you've contributed to, that you develop on, or that you just enjoy for entertainment purposes, or whatever you might be doing with it, perhaps it's education, I think it's important not to be overconfident as you move into virtual reality. So, just to give a couple of examples; that's a quote from the original Tron movie there, "on the other side of the screen, it all looks so easy." Some things that look kind of easy on a screen end up not working so well in virtual reality, so here are a couple of examples. The panel on the left shows an adaptation that was made to a game, one of the first-person shooters that we adapted for VR. If you just keep all of the geometry, the camera views, the cut scenes, all these kinds of things the same, it quickly leads to uncomfortable experiences in a lot of cases, and in some of the cases there end up being, I think, kind of odd social cues. And so here's one of the cases where, on the left there, that was rendered for viewing on a screen; but if you put the virtual camera in the same place in virtual reality, you end up looking right at the center of the image, and that makes the person feel like they're short, right? And so, just as I know many of you can easily change your avatar height in OpenSimulator and in Second Life, you've had a lot of chances to experiment with avatars of different heights; I know people have done studies on that as well. Jeremy Bailenson at Stanford, for example, has experimented with how people interact in Second Life environments with a variety of different kinds of avatars. So this is accidental in this case: people ended up being very short and asking, why am I so short in this space? Well, it wasn't intended; it's just that it was adapted directly from a
screen. The same thing goes for the panel on the right: this particular character ended up looking much more intimidating in virtual reality than on a screen. The character was very large and powerful-looking, and that sense of presence has quite a different, say, social communication, a different social implication that it causes, right? So I find that interesting. As you move from this environment that you're comfortable with, looking at this virtual environment through big screens, and move into virtual reality, I just want to sensitize you to these kinds of problems and issues, if you haven't experienced them already. Chances are many of you have already seen these kinds of things, in which case, sorry if I'm beating a dead horse. But I never cease to be amazed at how often, in all aspects, as we develop for virtual reality, the confidence that we bring to the table, let's say, ends up working against us; the experience or expertise works against us. So being aware of what you know that's useful and what you know that's counterproductive, and being able to separate those out, ends up being one of the most important skills in this space. So again, on doing simple kinds of adaptations, let's think about the problem of locomotion. I can take the keys here on my keyboard and move my avatar around; I don't think I'll do that right now, I'm afraid to screw things up, so yes, I'll just keep standing here at the podium, and I won't mess things up. So as you move your avatar around, you move yourself forwards, backwards, side to side (called strafing); maybe you might want to do torso rotation. But most people are experiencing virtual reality sitting in a chair; for safety reasons, we certainly recommend that to the vast majority of people, because unless you've really carefully set up your environment for walking around, you can do a lot of damage to yourself or your hardware. And so most people are sitting down, so that's what we mostly recommend, and that's where the products so far have
been targeted. So as you're doing that, that's all perfectly fine; while you're looking at a screen it's reasonably comfortable, and it simulates very much how we walk around in the real world, so that feels great, it's compatible with that. However, as you get into virtual reality, something gets in the way, which is the vestibular system. The vestibular system has two main components, one behind each of our ears, and what I found really fascinating as I studied it is that it contains sensors that are almost identical to the sensors that we're using to track the head. I thought that was really incredible: here we are making this technology using MEMS sensors, and we weren't really trying to mimic the human body with them; we were just trying to get the engineering information that we needed to provide the right visual stimulation to the eyes. And working against us are the body's natural sensors, which are trained to measure the same things and let us know whether everything's going right in the world, helping us with balance and comfort in the real world. And so I found it really amazing: the semicircular canals measure angular acceleration, which is very close to the angular velocity provided by the MEMS sensors that we use, and they ended up being arranged in our bodies so that there are three orthogonal axes that they're measuring along, much like the three-axis gyros that we have. It just blows my mind; it was incredible to me: wow, there's a three-axis gyro right there in our heads, two of them, in fact. And then the otolith organs, called the utricle and saccule, end up measuring linear acceleration, and you get all three degrees of freedom from that. It actually has some redundancy; it's actually providing four degrees of freedom worth of measurements, at least, but nevertheless you can transform it, and it provides exactly the kind of data we need. I didn't find a magnetometer in our heads, but, you know, some
animals have things like that as well, of course, as you know. All right, so the interesting thing is, as you try to do locomotion, you run into a phenomenon called vection, which is the illusion of self-motion. The problem is that you're trying to move your character along, and you perceive optical flow with your vision system; I'm just showing some arrows here as flow lines. As you track features in the environment, especially at your periphery, where you're not really focused on them, but in a wide-field-of-view virtual reality system, which we're all excited about, your body ends up being most sensitive to this even though you're not focusing on it directly. Many of you may have experienced vection before: if you're in the United States, for example, you've been stuck in traffic; if you're in Europe or Asia, you've been stopped on a train, let's say; of course, we all have experience of both, perhaps more commonly in Europe and Asia being stuck on a train. You notice at your periphery that the train out the window to the side starts moving, but actually you perceive yourself as moving backwards, and it's a bit disorienting. Usually you catch yourself pretty quickly; maybe it only takes a couple of seconds: oh wow, I'm not moving backwards, the other train's moving forward. Well, that's an example of vection, and yeah, it's quite disorienting. And in virtual reality, if you're not careful, vection gets caused in a careless way, and for very long periods of time. That's not a flaw of the hardware technology itself; the hardware may be quite comfortable. It's that the experience has been designed to keep causing vection for you, over and over again. So it's very interesting that we're causing this phenomenon, well known to vision scientists, people who study human vision, and this mismatch is being caused intentionally to try to get more realism. It's just important to understand that this realism comes at a cost. So I found it really interesting: once I understood this
mismatch, you know, as in many areas of research, I find that I reach different levels of understanding. My first level was: what is vestibular mismatch? I didn't know anything about this, so I started researching it and presented some of the things like I just showed you. Then people start to say, and I hear this comment from people: well, moving in VR while motionless in the physical world is bad, that's a bad mismatch. That's level two of understanding, let's say. But then people thought about it a little more, and they say: well, actually, it's just bad having mismatched accelerations, because that's what your body is measuring. So that seems like level three: since your vestibular system is like an inertial measurement unit, as long as they're matching, then you're probably okay. That means moving along straight, not turning, at constant velocity should be comfortable, right? But, you know, I'm at level four, where I'm not really sure that that's comfortable either, because I don't think we get a lot of real experiences where there's no kind of rumble, we're not on a vehicle, and the environment is just smoothly moving along. So, sure, maybe on some maglev train it might be smooth enough and you can look out the window, but I don't know how long I want to look down and see things moving by and consider that to be comfortable. So I'm really not completely sure; there are a lot of interesting research directions to go in. For example, one of the things we tried: suppose you need to move your avatar forward; you can either bring it up gradually to the desired speed, or you can just instantaneously jump to the speed you'd like to move at. Counterintuitively, to me at least, the instantaneous jump ended up being more comfortable. It seems that the brain is willing to accept a mismatch between your vestibular sense and your
visual sense, even a very large mismatch, for a short amount of time; it seems to discard it as a glitch, like, oh, that's just a hiccup, forget it. But if it's a sustained mismatch that's smaller and happens over a longer period of time, then it starts to become uncomfortable. So I found that very interesting. You know, if you can point and click and teleport where you'd like to go; people in your space here are very comfortable with teleporting, and that's just great. If I could teleport in the real world, I might do that. But there's going to be a kind of tension between people wanting realism, wanting to feel like they're in the real world, and us wanting to make experiences comfortable, so striking the right balance is going to be very interesting. There are all kinds of things you can do to try to improve locomotion, and there are researchers who work on this in the context of virtual reality. I just want to say that we're getting into this in so many different kinds of ways, and I find this a fascinating area of research: to try to make it comfortable, immersive, believable, so that you feel presence, but it's okay to do things a little bit differently, I think. And it's interesting to figure out what these different modes are going to be, so that you're quite comfortable moving your avatar around in virtual reality, and it doesn't break presence for you, and it's comfortable at the same time, because there's no need to drive people through long vection experiences unless that's just what they signed up for. If you want to go for a virtual roller coaster ride, go for it. Speaking of virtual roller coasters, there's an example that I like to give. There are a lot of people experiencing virtual roller coasters and amusement park rides while sitting in a chair. One challenge I've put out to people is to try to build a real roller coaster experience where, on the roller coaster itself, you're just sitting in a chair.
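Going back to the avatar-speed observation above, the two ramp-up strategies, easing the avatar up to speed versus jumping instantly, can be sketched as two profiles. This is a toy illustration with made-up names and constants, not any engine's actual code:

```python
def avatar_speed(target, t, mode="snap", ramp_time=0.5):
    """Avatar forward speed t seconds after the move key is pressed.
    'snap' jumps straight to the target speed: a brief, large
    visual/vestibular mismatch that the brain tends to discard as a
    glitch. 'ramp' eases in over ramp_time seconds: a smaller but
    sustained mismatch, reported in the talk as less comfortable."""
    if t < 0.0:
        return 0.0
    if mode == "snap":
        return target
    return target * min(t / ramp_time, 1.0)   # linear ease-in
```

The surprise reported in the talk is that the discontinuous "snap" profile, not the smooth "ramp", turned out to be the more comfortable of the two.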
So the VR experience is to be sitting stationary on a chair or on a couch, while in real life you're moving on a roller coaster or a motion platform. No one's taken me up on that yet; let's see. In a live audience I get lots of chuckles, but I don't have the audio channel on, so I don't know if I'm making good jokes or not; let me go to the next slide just in case I'm not. What about tracking your hands? Of course, people want to increase the amount of presence and feeling that you have in virtual reality, and so here's an image up at the top of my slide from SoftKinetic, shown at CES this year. People are attaching RGBD cameras to the Rift, or they're putting them in the environment, and it's wonderful; you can bring your hands into the environment. I think there are still a lot of performance and reliability issues in this space; of course, I suppose it will continue to improve. But I should point out something that I find very interesting, which is: what happens when we get lazy? Take the Wii remote, for example. There are a couple of pictures here: one is an image of folks in a retirement home using the Wii remote to get exercise in a bowling experience, and the other image I found is of a comedian called Lazy Wii Guy, a guy who sits on a couch, eating pizza in one of the videos, putting the Wii remote between his toes, and showing how he can make effective use of the Wii remote by actuating as little of his body as possible. And so I think that, again, for comfort and long-term use, we're going to tend towards very simple kinds of motions. There's at least a very fascinating question here, which is: what are the kinds of motor programs that are easily learnable? In other words, I'll give a simple example: take the mouse. Who would have thought, before ever trying anything like this, that you can move a mouse around on the table, and then, when you see the pointer moving around on the screen, I don't know, it sort of feels like our
finger's pointing around on the screen; and it's a lot more comfortable than holding your arm up in the air. In human factors, this is called the gorilla arm problem: you're moving your arm around, and you get tired after a few minutes. So with this very simple motion, I feel a lot like I'm pointing around on the screen, and it doesn't really seem to diminish my sense of doing that. And so I wonder, you know, can I make very simple motions with my hands and my fingers, in some kind of way, without having to really move my whole arms around, such that my brain very quickly can handle that motor program? With some very simple kinds of motions, can I just feel or believe that my arms are moving around just fine, learn how to grasp things, and be comfortable with that? So these are some things I really wonder about from a human-factors, user-interface design perspective: what are the kind of lazy interfaces that we can make that work quite well, or are quite believable, and maybe have very high levels of reliability and low amounts of, let's say, technology demands? I think that's very interesting. All right, another interesting problem: the face-to-face problem. Of course, one of the first things I thought about, when I thought about my grandma and her sister and started thinking about developing this technology, was, oh, how long will it be before I can get an app together? And then I thought, oh yeah, my grandma and her sister are going to be looking at each other with rectangular bricks strapped to their faces; that's not good. And of course, many, many people talk about this all the time; it's an interesting problem and challenge, what to do about your face if people are wearing this. And when you think about face-to-face interaction, the competition is going to be video teleconferencing, right? Take Blue Jeans or Skype or other platforms for teleconferencing: you feel the presence of people there, you can guess their mood or how much they're paying attention, you can see if someone's looking
down and typing at the keyboard, which I can't see right now too well while I'm looking out at the audience; I don't know who's paying attention or not, and it's difficult for me to assess. So how real should the avatar be? I took some images here from Paul Debevec, who strives to make extremely realistic-looking faces that can be animated for current movies. What if you want that in real time on low-end platforms? It's going to be challenging to make interesting, useful, effective animations. And of course, we have to be aware of the uncanny valley, a term I was quite delighted to see being used in virtual reality. I was familiar with it from robotics, because as people tried to make humanoid robots look more and more realistic, especially in East Asian robotics research, they become, you know, kind of creepy, and the same thing happens in virtual reality as well. So I find it really fascinating to think about, I guess, a kind of minimalism: what are the aspects of human emotion and attention that need to be communicated in a social setting, so that we feel like it's really effective and it becomes a great replacement for live face-to-face interaction? And will that involve capturing the motions of your face, the directions your eyes are looking? I think eye tracking ends up being a very interesting problem: you can put cameras inside of a headset and try to track things, and that's going to capture some amount of your motion; you can capture your eye motions, but your facial motions, your facial expressions, are certainly going to be altered in some kind of way; you can't capture all of that while you're wearing something on your face. As the devices shrink over time, that may get better: if virtual reality turns into some small eyeglasses that you put on, then it'll be a lot easier, maybe, and more comfortable; maybe you can judge people's emotions better that way. So I find this a fascinating topic of research, and I'm sure something that
all of you, as you work on this OpenSimulator platform, find interesting; I'm sure you have things to say about that. There is always the question of photorealism in general, and I think the OpenSimulator platform and Second Life experiences, with their great, beautiful virtual environments, are a great example of how virtual reality doesn't have to be trying to simply replace reality. There are going to be a lot of applications where people do want to do virtual traveling, to feel like they're present in some familiar place, or to visit new places and feel like they've actually been there, so that maybe two weeks later they can't really remember very well whether they were actually there in person or whether they were there through some kind of, let's say, robotic avatar or robot surrogate that lets them feel like they were there. As a roboticist, I find it really fascinating to mix in mobile robots; they could be rolling through buildings, they could be flying through the air, which again leads to greater challenges in the comfort of the experience. People are already attaching cameras, GoPros and omnidirectional cameras, to quadrotors and humanoids. There's a humanoid project, joint between the national labs of France and Japan, that's been going on for a decade, where researchers have already connected brain-machine interfaces to the humanoid robots. Brain-machine interfaces, you know, are an up-and-coming area; it's very hard to make them reliable, but nevertheless it's a cool and exciting area, and a lot of interesting possibilities could come out of it. Last I heard, they've connected the Oculus Rift to the humanoid and the brain-machine interface. So imagine you could look down at your humanoid body, imagine where you want it to go, and it goes there, and you get to look around while wearing a virtual reality headset. That's pretty cool; it's going to be hard to make all of that work reliably, but wow, pretty exciting. Let me conclude, and then I'll be happy to take questions. So the
hardware is rapidly improving, and we're doing the best we can, again mainly to raise that baseline of human comfort and safety, especially comfort over long experiences. And of course we always want to increase the levels of immersion and presence, but again, I want to temper that with: what exactly do you need for the task that you're trying to solve? So we have greater realism and better comfort, but think about what your particular favorite experience should be; as I said, the wheel there is, I guess, real enough for the gerbil running along. We should think about, for the particular tasks and the particular types of interactions we want to have, whether they're interactions with people or a lone experience, what level of realism is manageable using the technology, is comfortable, and really captures the pieces that we need to make the experience effective. Thank you very much. Thank you so much, Steve; that was wonderful and very, very interesting. Certainly in my first experience with the Rift, I ran into some of the things that you described: I've gotten tangled up in my cable, and at one point I think I was looking at a model of a table that was so realistic I wanted to look at the underside, and banged my head, and the Rift, on a real table in the process. So it's definitely worth making sure that you're in a safe space, and sitting, when you try the Rift. So we have quite a few questions, and I'm just going to head one off at the pass, because I got probably 20 people asking the same thing: when will the consumer Rift be out? Well, that's of course the most common question, and all I can say is about the same remarks that our CEO Brendan Iribe has made in the last few days, which is that there's Gear VR, which involves our partnership with Samsung for a smartphone-based VR experience, and that's to be released very, very soon, within a month or two, and the full-blown consumer Rift is on the, let's say, many-months scale, and that's about all we
can really say at this point. That's okay, I understand, but I figured I would at least throw the question out there so that people could hear what you thought about it. In terms of the user experience with the Rift, one of the most common pieces of feedback that I hear when I'm demoing it at the University of Cincinnati is that people immediately look down to see their hands or their body; the experience of being bodiless can be very disorienting. Do you have plans to integrate hand tracking into the Rift itself? I think it's premature for me to talk about particular plans that involve the platform, but I'll say that there's certainly a lot of interest in general. As far as R&D goes, we're looking at different possibilities, both in Oculus R&D and, as a researcher, as I talk to researchers all around the university community and industry as well; in general, it's a very important topic. There's one thing I want to say about it that's interesting, and I meant to point it out during my talk: one of the first things I did when I personally discovered the importance of perceptual psychology and human factors was to hire a couple of perceptual psychologists. I worked with them, and one of the things I learned from them very quickly was that a minimal kind of representation of your body ends up going a long way. For example, in a cockpit game, we just rendered a simple torso, and just having that felt a lot better; there's some literature that supports that in the vision science community, and I found that interesting. So again, as a researcher, and I said this a lot in robotics, I always try to warn against confusing something that's necessary with something that's sufficient. It may be sufficient to track everything and bring that all into the virtual world and match it perfectly, which is very, very hard to achieve, and that may be sufficient, but
it's interesting to understand the necessary goals: when can something very simple substitute? If my arm tracking is not perfect, off by a bit, your brain will learn to compensate for that, which is very interesting. However, if all of a sudden your arm looks like it's broken at the elbow, that's very disturbing, and it'll be kind of shocking for you. So it's very interesting to figure out the minimal performance criteria, the necessary parts you have to get right to make it comfortable. We'll all dream of the sufficient criteria, which is to make everything perfect, but it's very interesting to find the necessary pieces. In terms of the user experience, again, we're getting lots of questions; here's a good one. It seems like many of the demos right now for the Rift are focused on single-player, Unity 3D-type applications. In terms of the multi-user experience, which of course for those of us involved in OpenSimulator or Second Life or similar platforms is what's so compelling, being in places with others: do you have any thoughts on designing or developing for the Rift in a multi-user environment?
That's an excellent question. As I said, some of the motivation in my talk was this: I'm very interested in allowing this technology to connect people all over the world. It's very exciting to me to be able to do that. I personally love traveling, I love culture, and I love the idea of pushing further in this area. When I think about the technical challenges, I feel like we're overwhelmed in many ways with the challenges of just getting single-person experiences working well and being comfortable. Latency, of course, is one of the biggest issues, and when you connect people over a medium, over the internet, and you want to have networked experiences, it's going to involve cleverness in hiding or dealing with the problems of latency. You can't just demand that the latency problems go away through better technology. The internet will continue to improve, and people are going to do research and try to improve it at all levels, improving bandwidth and transmission rates, but that ends up being just another set of challenges. I don't think there's any shortage of enthusiasm for this. Even though VR has been around for a long time, this rebirth of VR is still in its earlier stages, and if we try to tackle things that involve even greater sets of challenges, it may turn people off too quickly if it's not a comfortable experience. So that scares me a bunch, with the extra latency and some of the additional challenges. That's just one thing I'll warn about, I guess, for people wanting to get into this space: heed my warnings about the human side of it, and make sure that whatever we're building is comfortable. If walking your avatar along in OpenSimulator is comfortable on the screen, I don't think it will be very comfortable in the VR case, so think about how to adapt that. Go through things systematically is what I would encourage developers to do, and try to assess the comfort of these things rather than just saying the code's already there, I can jump in and hack this in a few hours.

That's definitely true. In some of the environments my team has developed at the university, there are so many issues with even things like scale: an environment that I've looked at on the 2D screen a million times, where everything looks fine, and then you get in with the Rift and you realize, geez, that refrigerator was built for a giant. So there are lots of issues with making sure the actual experience the person has with the Rift holds up. If you have the opportunity later today, Steve, to walk around this conference venue with the Rift, I'll be interested to see what you think. We may also have giant couches or something, I don't know; that would be fun to try. So, in terms of, well, I'll phrase it this way: in my experience, when I'm introducing the topics of virtual reality or virtual worlds to people who are unfamiliar, there seem to be almost two camps. There are people who instinctively are excited and curious and kind of want to dive in, and then there's another group of people who have a very strong fear response. They're not sure what it is, and they seem afraid to even try it. Do you have a similar experience when you're talking about this technology with others, and what do you say to those who are a little afraid of even trying VR?

That's very interesting. I think I'm an unusual person in that, well, there are plenty of people like me perhaps, but I tend to be surrounded by a lot of technology-oriented people, and I spend most of my time inside the company hacking code like crazy, working on the nitty-gritty things, so anybody who would come by and visit was already pretty excited and wanted to try things. So I probably don't have a very good statistical sample, a good sense of what the potential audience is like. It's interesting, though: I am old enough to remember people fearing computers in the 80s. I was a teenager excited about hacking on home PCs like the TI-99/4A and Commodore 64 and things like that, and I do remember people fearing computers in some way. As a roboticist, I have definitely been aware of people fearing robotics, and, well, aside from maybe a robot revolution where they might take over the world and kill us off, probably not too realistic a fear, people do have practical fears about robots replacing their jobs and things like that. So it's very interesting that you asked that question, because I haven't come across a lot of that in virtual reality. I think it is reasonable to have a little bit of fear about the comfort levels, because it's very easy for someone to hook up the headset and give somebody a horrible experience of vection in a carefree way. If someone is a novice, you might say, I'll use the keypad and I'll move you around in Tuscany, don't worry, I'll take care of that for you, and if you're not controlling the motion yourself, then it's even worse. So that's one thing, I guess, for anybody who is trying to introduce it to family, friends, or other folks who are far away from this technology: at least make sure that you're easing them in gently with something comfortable like VR Cinema, or, if it's in a basic place, try not to do locomotion for a while until they're comfortable.
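The easing-in advice above could be captured in code as a simple onboarding gate. This is only an illustrative sketch, not anything from Oculus or OpenSimulator: the class, mode names, and session thresholds are all hypothetical, assuming a client where locomotion modes can be toggled per user.

```python
# Hypothetical comfort-onboarding gate: novices start with no
# self-locomotion (look-around only), then graduate to teleport-style
# movement, and finally to smooth locomotion, based on how many
# comfortable sessions they have completed. All thresholds are made up.
class ComfortGate:
    def __init__(self, teleport_after=2, smooth_after=5):
        # Assumed session counts at which each mode unlocks.
        self.teleport_after = teleport_after
        self.smooth_after = smooth_after
        self.comfortable_sessions = 0

    def record_session(self, was_comfortable):
        """Count a finished session; discomfort sets the user back a step."""
        if was_comfortable:
            self.comfortable_sessions += 1
        else:
            self.comfortable_sessions = max(0, self.comfortable_sessions - 1)

    def allowed_mode(self):
        """Most intense locomotion mode this user should be offered."""
        if self.comfortable_sessions >= self.smooth_after:
            return "smooth_locomotion"
        if self.comfortable_sessions >= self.teleport_after:
            return "teleport"
        return "look_around_only"
```

For example, a brand-new user would get `"look_around_only"`, and only after a couple of comfortable sessions would `allowed_mode()` start returning `"teleport"`. The point is the shape of the policy, gating motion behind demonstrated comfort, not the specific numbers.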
That's a really good tip, and I think we learned that the hard way as well. As much as possible, we try to let the user control their own experience, because I think it's that unexpected movement that really can intensify the disorientation. Are there other ways that people learn to use the Rift? I've had the experience myself where maybe initially it's disorienting, but after a few minutes it gets better. Are there things you can do, other than the environment itself or the way the avatar moves, to help lessen that feeling of disorientation?

Well, certainly, our goal, for those of us at Oculus and others in this rising industry, is to make the hardware as comfortable as possible, so that's one part of it. In terms of alternatives to improving locomotion, thinking about it from the developer side, one thing that's also known is that if you give people a task to do, if you focus their attention on something, it helps. It's kind of like they say: if you're seasick, look at the horizon. I don't know if that works or not; it never works for me when I'm getting seasick. But if you give people a task to focus on, we've noticed that they tend to be less susceptible to the effects. Through the visual stimulation you give the user, you have a lot of opportunity to guide them in some way. So if the experience does require some optical flow that's mismatched in some way, the task you give them may help. Something ridiculous might be threading a needle, though you can't do that at that resolution, but say you want them to stack some blocks, or operate some levers, or something like that. Focusing on that task might help take the brain's attention away from some of the mismatch. So that's one opportunity there.

That's a really good idea and a good tip. I don't know that I've done that enough, and I'm sure there are other developers here who can use that advice too, particularly for the first introduction, to help ease folks in. Switching gears a little bit: yesterday we had Philip Rosedale of High Fidelity here, I think you mentioned him and that you're a little familiar with his work, talking about this concept of the metaverse of connected worlds. When you think about the future of where Oculus VR is going as a company, and of the technology itself, what do you think that looks like? If you project out, are we 5 years away, or 10 years away, or is it closer than that, before people can actually travel between worlds in a very immersed way?

That's a very good question. Given my decades of experience as a researcher, I tend to be very hesitant to make predictions, because I know that R&D is full of surprises all the time. As far as what advances on the technology side, and at what rate: the data transmission rates that we have at all levels, from chip-level abilities to handle data all the way up to the internet, at what rate will these things advance? How will people be motivated? How will the transfer rates in people's homes increase? Some of these things will happen for reasons that are not purely technology limitations: politics and business reasons and other factors. So there are a lot of factors that make it difficult to predict. Another way to look at it is that it may not be too hard to make such an experience, and such interconnectivity, within a couple of years, but getting the quality level really high, and being able to support millions of users doing it all at one time in some kind of shared, interconnected space, is going to take an awfully long time. It's very interesting, I guess, because as it gets exciting, people will start going into it, and then they'll clog up the bandwidth, and it may cause people to back off
from it quite a bit. So that makes it really hard for me to speculate. Sorry if I didn't answer it too well.

That's a fair answer, I think, a very fair answer. One of the questions that drives me crazy, that I get asked often, is: what's the killer app for virtual reality, what's the use case here? For me, anyway, I often think: what's the use case for real life? All the things that we do in real life, those are the use cases for virtual reality. But that isn't always the case; there are some things that VR does very well, and other things, maybe not so much. When you think of the best use cases you've seen so far with the Rift, can you give us some examples? What are the best sort of things that you've seen done with the Rift?

That's a very interesting question, and I continue to be surprised, to the point where I will say that the most interesting experiences are yet to be discovered, and they're most likely to be discovered by people who don't have a lot of preconceived notions about what virtual reality is supposed to be good for. For example, if you're a very experienced developer of first-person shooter games, you may have a preconceived notion; that's your background and experience, and it pulls you a certain way. If you're very experienced with Second Life and OpenSimulator, again, that will bias you in a certain way. But I continue to be fascinated by people, especially younger people or people who are new to this area altogether, who come up with something very surprising and innovative. One of the things that really struck me as an example was virtual reality cinema. It was made within just a few weeks; Unity and the other middleware have made it really easy for people to make experiences quickly, and lowering the barrier to entry made this happen fast. So within a few weeks we found this app, virtual cinema, made by a young man in South Korea. He now works in the company and has continued to refine and advance and develop virtual reality cinema. When I first heard of it, I thought: oh, that sounds kind of lame. What, you sit in a virtual movie theater and you watch movies? Why would anybody do that? But then I tried it, and I tried it on my sons as well, and that was the only experience where they really sat there comfortably for a long time, watching movies and feeling kind of relaxed. It was really interesting to me because, first of all, it leverages all of the media that's already out there. There are all kinds of movies, and you don't have to do anything to them to just show them, in 2D or 3D, in a virtual movie theater. So you're leveraging a bunch of content that's already there, and you're providing a very relaxing experience. Later, after talking to a lot of people from East Asia, especially ones who live in crowded cities like Tokyo, Seoul, or Shanghai, I learned they have very limited space, so the ability to just relax in a big movie theater, or to build a big virtual mansion to just hang out in, maybe meet a friend or two, ends up being really a great experience. Whereas we were trying to increase the intensity of the experiences here in the US, someone making this kind of basic relaxation experience was a reflection of their culture, of the kinds of things they wanted to do with it, and I found that really fascinating. So I expect to see lots more stuff like that. It's especially interesting in this case, as I said, that it leverages existing content. Say you want to make a virtual movie theater experience where you're watching the old movie Twister, with tornadoes on the screen, but then there's a little fun gag: the theater rips apart in the middle of one of the tornado scenes. Or you're watching a zombie movie, and then you turn around and, oh my god, there are zombies coming after me. It's great, the things you can do without even disrupting the content that's on the screen. So you can
just add these experiences as annotations.

That's fantastic, and we often talk about that even in the 2D virtual environment. I've been working in this space for more than a decade now, and often the most surprising limitation at the moment is our own imaginations. It's sometimes hard to even fathom all the things you could do in virtual reality. So I agree with you in the sense that the best experiences are really yet to come, and I can't wait to see them. Imagine what Hollywood's going to do; it's going to be incredible. So those are the positive, hopeful things. I always like to ask: what are the things about the Rift, the technology, and the use of it that keep you up at night? What are you worried about? What are the biggest challenges that we need to overcome in order to really successfully bring this to a mainstream audience?

I think this is a very good question, and I think my talk reflected it to a large extent: the human factors part is what I worry about the most. This is a consequence of lowering the barrier to entry. The proponents of open-source software lower the barrier to entry, people are doing all these things, and it's great; if you can get cheap hardware, people jump in. So we need to make sure that the human comfort side is respected, that the hardware technology we develop continues to improve on that front, and that the content people make does too. If all that goes along together, it's great. Of course, the thing that keeps up at night a lot of people who have been dreaming about VR for a very long time, especially people who were into it back in the 80s and 90s, is this: here we go again, are we setting up for another failure? If it were to fail, it would most likely be because these issues of human comfort and human factors were not taken seriously enough. I think these are addressable problems, addressable issues, but if we're falling behind on them while all kinds of uncomfortable content is made very quickly, then we might be overwhelmed. As people have their first tastes of virtual reality, if those end up being uncomfortable, then I'm afraid that might doom the whole movement, which would be a bummer, because the technology is there and we do scientifically understand the causes of discomfort. It's just a matter of getting that understanding to the right people, especially the excited developers and enthusiasts at all levels who are giving people their first experiences.

That brings up a good question: for all the folks here in the audience, or watching on the web stream, or who may watch this later, is there any good resource that you know of? Is there some place on the web that developers should check for the best design ideas and tips and tricks, to make sure they're designing good experiences?

That's a very good question. We took it seriously within the company and worked hard on a best practices guide, so look for the Oculus Best Practices Guide; download that and look through it. It was worked on by a number of people, including experienced game developers, perceptual psychologists, myself, and several others who contributed, so that's very important. There's also looking around on the forums; we have a developer forum at Oculus, and of course there are forums around the world. I'd just emphasize: be careful, there's a lot of noise out there, just like with any kind of search. You'll find all kinds of opposing views and conflicting opinions, so I tend to gravitate more toward published scientific articles. However, it's hard to get a lot of those for the latest wave of technology, so one of the most interesting questions is: if a study was
done in the 90s, say, about virtual reality, what aspects of it will almost certainly remain true today with the current technology, and which parts need to be redone or reinvestigated? That's a very interesting kind of question. I think a lot of people who work in human factors, vision science, virtual reality, and graphics are asking exactly this: what do we really need to study right now to improve our understanding, to make sure we don't jump into any pitfalls?

Thank you, that's a really great tip. I did paste that link into chat, and we'll be sure to send it out through our Twitter stream as well for the developers. So we're coming to the end of our time here, and as a closing question: this is the second annual OpenSimulator conference, and, knock on wood, we'll be back again this time next year. Thinking in a much shorter term, I know that it's hard to get scientists to make predictions, but if you imagine a year from now, where do you hope we will be in terms of VR and the Rift? Without getting into too much detail, what do you think is reasonable for us to expect to see this time next year?

Hmm, well, it's very hard to say what's reasonable, but I would be very excited, and I think it's within reach, to have hundreds of thousands, maybe millions, of people having VR experiences, with lots of devices out there. In a year, that should be possible. As I said, I guess a lot of that will hinge on the success or failure of the Gear VR launch that we have coming up, so I'm hoping that will put VR into many more people's hands and give people a lot of interesting basic experiences. A year from now, I think we'll still be on that rising curve: things will keep going up, awareness will continue to increase, and hopefully the demand for VR content will continue to be on the rise.

That's wonderful. And the Gear VR, can you explain, for the folks who may not know? That's with a mobile device?

That's correct, yes. It works with one of Samsung's latest mobile devices, and it's an extra piece that you buy, a device you insert the smartphone into, that contains some extra sensor technology and, I guess, optics as well. And there's software that's been worked on collaboratively between Oculus and Samsung that makes it all work together.

Well, that's really terrific, and thank you so much for coming today, Steve. This was really wonderful: for those who may not know, an introduction to some of the more technical challenges, but also lots of great information for the developers in the audience. I think many people here are hoping that we can help provide some of the content that people will experience through the Rift. So thank you so much for being here today; it was really a pleasure to have you.

Well, thank you very much for the invitation, and good luck to everybody out there. These are very exciting times, for sure.

Indeed. Thank you so much, and thank you to our audience for many terrific questions. Just as a reminder, we're almost at the end of the keynote session. We have a 30-minute coffee break for those who need to get up and stretch your legs a bit, and then we'll be returning with the first round of breakout sessions. Those sessions will be held in the breakout tracks, so if you open your map here in world, you'll notice that the breakout regions are just south of the keynote regions where we are now. There will be tracks for business and community, there will be training courses for education, and there will be the learning lab, and you can teleport to those regions either from your in-world map or the maps in the region here. Also be sure to check out the rest of the day's events on the conference program at conference.opensimulator.org, and be sure to tweet your questions, comments, and feedback. We'll see you in about 30 minutes. Thanks, everyone.