So, again we are switching topics significantly now, though they are obviously all interrelated for the development of virtual reality systems. I want to talk about tracking systems; in other words, systems designed for tracking, or estimation if you like, of the motions that are occurring in the physical world that need to be somehow brought into, or taken into account in, the virtual world. So, what do we want to track, or what do we need to track? Some things are optional; some things seem absolutely necessary. Remember the rigid bodies we talked about, and the transformations of rigid bodies. We are going to be estimating some transformation that corresponds to how much translation and rotation has occurred in some body of interest since the beginning. When we think about rigid bodies, the number one case that we care about is the head that is wearing the head-mounted display; we need to track that. Even in a cave-like virtual reality system, where you have projection screens all around, if you want to change the perspective and provide stereo to the user, then you have to track their head in that scenario as well. So, even beyond head-mounted displays, you may still need to do head tracking. This is very important because it provides the correct viewpoint; that is the part that is most sensitive to get right, I would say, because that is what is going to your retina ultimately, and small imperfections there, as we have been saying, can cause great discomfort or disbelief; it may not even look right at all. So, that one is very critical. Two, we may be able to do even better by tracking your eyes: we could then figure out which way your fovea is pointed, provide higher-quality images there, and not worry about the rest. It would be like some kind of automatic compression built into your graphical rendering system. That would be wonderful; if you can track accurately enough and with low enough latency, low enough delays, then that would be great.
Three, perhaps you want to track the palms of your hands. Maybe you are grabbing onto controllers and you want to track your hands as they move through space. If you are off by a little bit there, it is probably not going to be nauseating; if you are off by a little bit for head tracking, it could be very nauseating. So, it is a different scenario. How much accuracy do you need? Are you trying to do telesurgery through a virtual reality interface? Then accurate hand tracking could be critical. If you are just trying to make some gestures to your friends, maybe it is not very critical; it depends on what you are trying to do. You have to think very carefully about the tasks here. For number one, it is critical to be very accurate and have low latency, but for the other ones, even for the eyes, the requirements relax: if you are not doing foveated rendering, but just want to keep rough track of where people are looking for some kind of social interaction in VR, then coarse tracking with a significant amount of latency might not cause much trouble; if you just want to know where people are looking in VR, that may be fine. So, as you go down the list, the performance requirements may vary significantly depending on your task. Maybe I want to track your fingers very precisely; maybe we will just add in the entire body; maybe there are movable objects. For example, I might have a coffee mug in the physical world, and I might want to keep track of where it is so that I can perceive it in the virtual world. I might even want to reproduce my desk, if I am sitting at a desk, in the virtual world, so that I know whether or not I am knocking my virtual coffee cup off of the virtual desk, because that will correspond to knocking it off of the real desk in the real world. So, maybe I want to bring some objects that are around me from the physical world into the virtual world. How do I do that?
Do I want to put special bar codes or some kind of easily identifiable markers, QR tags and things like that, on my objects? Do I want to use pure computer vision techniques to try to bring them in, hopefully in a reliable way, into my virtual world? So, keeping track of movable objects that are around me might become important. And finally, other people in the space: people, maybe pets, anything moving around autonomously that you want to keep track of; it could even be your vacuum-cleaning robot, it does not matter to me, but there are other things you might want to track, and that may become relevant to your VR system. It could be that the other people you are tracking are in fact in a shared virtual reality space with you, and they could collide with you in the real world in addition to the virtual world. So, you need to keep track of them as well, and there needs to be information shared for a system like that. So, that gives you a bunch of different possible bodies to track, and sometimes these bodies are all attached together: the head and the eyes, the palms of your hands and your fingers, your whole body attached together. There are kinematic models you could make and utilize to improve tracking methods. So, that is something to think about; it is not necessarily just a bunch of detached rigid bodies. Now, for each rigid body, we want to estimate the rotation and translation. (I probably should not say rotation "plus" translation; I do not want to make it look like I am doing some bad mathematical operation.) Well, remember that for rotation we had a 3 by 3 rotation matrix R, or we used quaternions, which we hopefully agreed was perhaps the best representation to be using; and for translation we used a 3D vector (xt, yt, zt). Using our homogeneous matrix representation, we put the 3 by 3 matrix R in the upper-left block, the vector (xt, yt, zt) in the last column, and a bottom row of (0, 0, 0, 1), finally completing the 4 by 4 matrix like that.
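The construction just described, a 4 by 4 homogeneous matrix built from a quaternion-represented rotation and a translation vector, can be sketched in a few lines of code. This is a minimal illustration in plain Python (the function name is my own; the quaternion-to-rotation-matrix formula is the standard one, assuming a unit quaternion in (w, x, y, z) order):

```python
import math

def quat_to_homogeneous(q, t):
    """Build a 4x4 homogeneous transform from a unit quaternion q = (w, x, y, z)
    and a translation t = (xt, yt, zt)."""
    w, x, y, z = q
    xt, yt, zt = t
    # Standard quaternion-to-rotation-matrix conversion (assumes |q| = 1).
    R = [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
    # Place R in the upper-left 3x3 block, the translation (xt, yt, zt)
    # in the last column, and (0, 0, 0, 1) as the bottom row.
    return [
        R[0] + [xt],
        R[1] + [yt],
        R[2] + [zt],
        [0.0, 0.0, 0.0, 1.0],
    ]

# Example: head turned 90 degrees about the vertical (y) axis,
# one meter forward along -z.
half = math.radians(90) / 2
q = (math.cos(half), 0.0, math.sin(half), 0.0)  # (w, x, y, z)
T = quat_to_homogeneous(q, (0.0, 0.0, -1.0))
```

The tracking system's job, then, is to estimate the entries of a matrix like `T` over time, one such matrix per rigid body.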
So, we are trying to estimate a matrix of that form over time, and the initial conditions sometimes end up being important. For example, as soon as we start up the tracking system and you put on a headset, which way are you looking? This often ends up being difficult. Suppose there is no camera; you just have a perfectly portable system with no wires attached to it at all. When I start up the tracking system, is the direction you are looking the direction that you want to call the identity? Or maybe you just have the headset sitting on the table, lying in some strange orientation, and you put it on your head and think, oh no, why do I have to turn my head around to face the front? So, there is always a startup condition here that is a little bit complex, I guess. If you have some fixed reference point, like a camera that you have set up, and you want to say that the starting identity transformation is going to be 1 meter in front of the camera, facing the camera, then you can figure out all transformations relative to that. If you do not have a fixed global reference, then sometimes you have to just assume the user is careful when they start up the program, or you may want to give them a reset key, so they can reset their position and orientation based on where they are looking right now, based on where the head is positioned right now. Does that make sense? There is always this initial condition problem, and as you have tried different applications in VR, you may have experienced some of these kinds of frustrations with the startup conditions.
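The reset-key idea can be sketched very simply. Below is a minimal, hypothetical illustration (class and method names are my own, not from any particular SDK) for the portable, no-fixed-reference case. A common design choice, assumed here, is to re-zero only yaw, the rotation about the vertical axis: pitch and roll are observable from gravity, so resetting them too would tilt the virtual horizon.

```python
import math

class YawRecenter:
    """Sketch of a 'reset' key for a headset with no fixed external reference.
    Only yaw (rotation about the vertical axis) is re-zeroed."""

    def __init__(self):
        self.yaw_offset = 0.0  # radians

    def reset(self, current_yaw):
        # Whatever direction the user is facing right now becomes the identity.
        self.yaw_offset = current_yaw

    def corrected_yaw(self, current_yaw):
        # Subtract the stored offset and wrap the result into (-pi, pi].
        yaw = current_yaw - self.yaw_offset
        return math.atan2(math.sin(yaw), math.cos(yaw))

r = YawRecenter()
r.reset(math.radians(170))        # user happens to start facing "backwards"
yaw = r.corrected_yaw(math.radians(170))  # now treated as straight ahead (0)
```

After the reset, all subsequent head orientations are reported relative to the direction the user was facing when the key was pressed, which is exactly the behavior the lecture describes.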