Now let us talk about the interface, or interference, between virtual reality and eye motions. What happens at this interface, and what problems arise? Now that we have enough background, let me list the problems I can think of. Number one: I have the screen, as I drew before, with the lens in front of it and the eye in front of that, and suppose the pupil, with the cornea drawn here, is aimed at the top part, so that we are looking at pixels near the top of the screen. That means you are not looking through the optical center, the central axis, of the optical system. Even if the lens is perfectly centered over your eye when you put on the headset, when you look toward the sides you are looking through the extreme parts of the lens, close to its boundary. If you remember the aberrations we discussed a couple of lectures ago, the problems are worse at the periphery. So now you have more chromatic aberration to deal with, and greater optical distortion; we have this pincushion distortion to handle. The focal plane may also shift a bit due to spherical aberration. Perhaps your eye could compensate for that, but probably not if the virtual image is already at infinity: if the aberration near the periphery causes the rays to converge slightly, then the image forms before your retina, and the lens in your eye can only add converging power, so there is nothing it can do to fix it. These are the kinds of problems you can get by looking out of the side of the lens.
So, as I said, there are problems with the focal plane and with optical distortion. If there is distortion, I could try to compensate for it in software, but the distortion I get depends on where my eye is pointed. That is very hard to compensate for, because the pupil translates along a circular arc as the eye rotates to look at different places. In principle, what I would like is a software-based distortion compensation function: I calculate a barrel distortion, as I mentioned a couple of lectures ago, and I would like it to depend on exactly where the pupil is. But if I do not have an eye tracker, then I do not know where that is, so I have to calculate a single one-distortion-fits-all correction, which means there will inevitably be some bad residual distortion with a simple lens like this. There seems to be no clear way out of this problem. And if you see distortion in your periphery while you move your character around in VR, or even while making motions in VR that perfectly match your natural head motions, then the accelerations you perceive in VR, because of the distortion, might not match what they would be in the real world, and you could get nauseated just from that. So a lot of interesting problems come from this off-axis viewing through the lens; it is a very big, very annoying problem. Number two: VOR gain adaptation, which I already explained, and which again is related to these distortions.
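The software correction described above can be sketched as a radial predistortion: the rendered image is warped with a barrel distortion so that the lens's pincushion distortion cancels it. Below is a minimal sketch in Python; the function name, the optical-center parameters (cx, cy), and the coefficients k1 and k2 are illustrative assumptions, not any particular SDK's API. Note that without eye tracking, (cx, cy) must be a fixed guess that cannot follow the pupil.

```python
def barrel_correct(x, y, cx, cy, k1, k2):
    """Radially predistort pixel (x, y) about an assumed optical
    center (cx, cy), using the polynomial model
        scale = 1 + k1*r^2 + k2*r^4.
    k1, k2 are lens-specific coefficients (hypothetical values here);
    with an eye tracker, (cx, cy) could follow the pupil instead of
    being one-distortion-fits-all."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale
```

With k1 = k2 = 0 the map is the identity, and points far from the assumed center are pushed outward more strongly, which is exactly the barrel shape that a pincushion-distorting lens undoes.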
So, the vestibulo-ocular reflex will nicely learn a different gain that compensates for you, but that may cause more fatigue, and it may leave you with the wrong gain when you re-enter the real world, causing the world to seem to swim around incorrectly and stationary objects to appear to move. Number three: there is the complicated interface between the display and the photoreceptors. As I suggested before, if we are going to bring the display this close, why not go all the way in and send the right signals directly to the optic nerve, or bypass the photoreceptors and give them exactly the stimulation they need? Of course, we do not want to damage the human eye like that. So we have to deal with the unfortunate complexity of the output pixels living on the display and the input "pixels", the photoreceptors, living on the retina; both have their peculiarities, and that causes a lot of difficulty. For example: (a) the pixels on the display switch their color or intensity at a non-zero rate; the change is not instantaneous. An LCD pixel can take up to 20 milliseconds to switch. If you are running at 60 hertz, what is the time between frames? 16.67 milliseconds, so it can take longer than an entire frame just to switch the pixels. If your vestibulo-ocular reflex is trying to perceive something as stationary, the object is jumping from pixel to pixel while the pixels are still gradually changing, which causes the object to smear. If you use something like an OLED display, which has a very fast pixel switching time, around 80 microseconds instead of up to 20 milliseconds, then you will have less blurring, but other problems arise, which I will eventually get to in this class.
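The frame-time arithmetic above is worth making explicit. A short sketch, using the figures from the lecture (the 20 ms LCD switching time is a worst case, and the 80 microsecond OLED figure is a round number, not a measurement of any particular panel):

```python
# Frame time versus pixel switching time.
frame_rate_hz = 60.0
frame_time_ms = 1000.0 / frame_rate_hz   # ~16.67 ms between frames

lcd_switch_ms = 20.0     # worst-case LCD response (lecture figure)
oled_switch_ms = 0.08    # ~80 microseconds for OLED (lecture figure)

# An LCD pixel can still be mid-transition when the next frame
# arrives, which is the source of the smearing during VOR:
lcd_exceeds_frame = lcd_switch_ms > frame_time_ms
```

Here `lcd_exceeds_frame` comes out true: the pixel transition outlasts an entire 60 Hz frame, while the OLED transition is a negligible fraction of one.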
It is just something to pay attention to: the real world does not have that problem; only the display in front of your eye does. (b) There is the RGB sub-pixel structure. Some of you tried this during the break, when we were taking a close look at displays with magnifying lenses; I encourage you to try it on your own time. You do not simply have simple dots or simple rectangles of RGB; there are sub-pixel patterns of individual R's, G's, and B's arranged together. Sometimes they form a nice regular pattern, sometimes there is some irregularity; there is always regularity at some level, but there are some interesting patterns inside, which is about all I can say at this point. That has to be dealt with. (c) Frames may go black at particular times; in other words, the entire display may be black for some period. If you have a slow-switching display like an LCD, it may never go black: it is always transitioning between different colors and intensities, just barely keeping up. But if you have a very fast-switching display, then for reasons I will explain when we cover perception of motion, you may want to pulse it on, turn it off, pulse it on, turn it off, so the display will be black a lot of the time. How does that affect your brain? How does it affect your photoreceptors? Something very interesting we should talk about. CRT displays had this behavior as well, because the phosphors would go dark very quickly. (d) Sequential, line-by-line display scan-out. Displays typically change their pixels in some sequential ordering; "sequential" is a better word than "asynchronous" here. You would like to change all the pixels at once, but instead they change in an organized way that looks a lot like how old CRTs worked, steering the electron beam and scanning it line by line.
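The consequence of line-by-line scan-out can be quantified with a small sketch: on a sequentially scanned panel, a line near the bottom is refreshed almost a full frame later than the top line. The function name and the 1080-line panel are hypothetical, chosen only for illustration:

```python
def scanout_delay_ms(line, total_lines=1080, frame_time_ms=1000.0 / 60.0):
    """How much later a given line is refreshed than line 0, assuming
    the panel scans out its lines sequentially and evenly across one
    frame (hypothetical 1080-line panel at 60 Hz)."""
    return (line / total_lines) * frame_time_ms
```

For example, the middle line lags the top by about 8.33 ms, and the last line by about 16.65 ms, nearly a whole frame. Nothing in the real world updates in stripes like this, which is why the lecture says this mentality needs to be re-engineered.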
So, that mentality has been around for a long time. The real world does not act like that, but the display in front of your eye does, in most cases, so it needs to be re-engineered. Let us see what else is going on. (e) Your photoreceptors, the input "pixels", are actually quite slow to respond: it takes them about 0.1 to 0.2 seconds, that is, 100 to 200 milliseconds. Isn't that interesting? (f) Finally, there are all the eye movements, especially microsaccades. Some eye movements are designed to improve stability; some do not seem to do exactly that, but all of them must cause at least a little bit of image shift; they cannot be perfect. So eye movements, and object movements as well, shift the image on the retina by some amount while you also have this very slow photoreceptor response. All of these effects arise from the complicated interaction between the display and your photoreceptors, and they need to be taken into account; I do not think all of them are fully understood. The more you understand about this, the better you will do if you get into engineering, especially the hardware side of virtual reality: computer engineering, hardware, and optics all come together here. Groups of people who normally do not work together need to collaborate to solve these kinds of problems: people who understand the physiology, neuroscience, and neurobiology of human vision, computer engineering, and optical engineering, all in one place to resolve this interface. Many different designs are possible here; I could replace the screen with a small retinal projector if you like, but you would still have many of the same kinds of issues. Maybe some improve and some get worse; there are interesting tradeoffs. I have one more.
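To get a feel for why the slow photoreceptor response matters, consider how far the image travels across the retina during that response window when the eye or the scene is moving. This is a back-of-the-envelope sketch only; the function name and the example angular velocity are assumptions, and the 0.1–0.2 s response figure is the one quoted in the lecture:

```python
def retinal_smear_deg(angular_velocity_deg_s, response_time_s):
    """Angle swept across the retina while the photoreceptors are
    still responding: a crude product of relative angular velocity
    (eye or object motion) and photoreceptor response time."""
    return angular_velocity_deg_s * response_time_s
```

For instance, a modest 100 deg/s relative motion combined with a 0.1 s response window sweeps the image through about 10 degrees, which is why even small imperfections in eye-movement stabilization can be visible.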
So, counting one, two, three, let me state one more case, and this one is really problematic: in the real world, vergence is usually coupled with accommodation. Remember that accommodation means refocusing, and vergence means convergence or divergence, the motions your eyes make to establish stereo correspondences. In the real world, and again this applies to younger people, when you bring an object very close, your eyes converge toward it and also refocus the lens. Those muscles are tightly correlated, tightly coupled, almost like when I move all the muscles of my hand together to grasp something without thinking about the individual muscle movements. Now, if I put the display in front of the eye like this and place the lens at the appropriate distance so that the rays coming from the display through the lens are parallel, then everything on the display is always in focus, regardless of perceived depth. That means this head-mounted display is training you to separate vergence from accommodation; it is telling your brain it no longer needs to couple them. If your brain is accustomed to coupling them, then you may unintentionally blur objects that appear close up, because your brain is trying to refocus on them, and eventually it will learn to separate the two. The same thing happens when you sit in a movie theater watching 3D movies: again everything is in focus, even when it appears very close in stereo, and that has motivated the movie industry to increase the depth of three-dimensional objects in movies. The objects are typically placed further back; they are not on your lap. They tried that for a while, and there were a lot of complaints of discomfort.
So, people may feel fatigued from this, because the brain is learning a new trick and having to maintain it: the separation of vergence from accommodation. The muscles are being decoupled in some way; it is like being given a new degree of freedom you did not know you had.