I now want to give a kind of bird's-eye view of the subjects of the rest of the course, after having given a historical perspective. So I will stop this part; I will just leave Palmer's picture up for the time being, since I do not have another placeholder picture. So, I want to give what I call a bird's-eye view of the main topics in this course, and then we will start jumping into the details. I believe my bird's-eye view will fill the rest of today, and then we will be ready to go further into detail. I do not know about the hypnosis part of VR; maybe that could be a good class project. So, bird's-eye view. I first suggest picking a sensor on your body. Pick a sensor, and another word for this could be "sense." I have an engineering background, so for devices we make, we call them sensors. If it is on your human body, then people will call it a sense or a sense organ, referring to the particular part that is doing the sensing. If it is the whole system together, going all the way into your brain, then maybe it is called the sense. So I may use the words sensor and sense interchangeably; it is hard to avoid because I have a lot of engineering background. So, pick a sensor on or in your body: eye, ear, perhaps your skin somewhere, like on your fingertip. If we want to involve taste, we can use the tongue; for smell, the nose. What is a sensor? Does anybody have a good engineering definition of a sensor? Has anyone thought of that before? A sensor is usually considered to be a form of transducer, and a transducer converts energy from one form into another. A sensor, then, is usually considered to be a transducer that converts whatever physical energy it is measuring from the physical world into signals of some kind. If it is a digital circuit, then it will convert it into digital signals. If it is your brain, then the sensor converts it into neural impulses. So, it is a signal of some kind that goes into the system.
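To make the transducer definition concrete, here is a minimal sketch in Python; the class, parameter names, and numbers are my own illustration, not from the lecture. It models a toy light sensor that converts a continuous physical quantity (irradiance) into a quantized digital signal, which is exactly the "physical energy in, signal out" idea.

```python
class LightSensor:
    """A toy transducer: converts physical irradiance (W/m^2)
    into a quantized digital signal (an integer code)."""

    def __init__(self, max_irradiance=1000.0, bits=10):
        self.max_irradiance = max_irradiance
        self.levels = 2 ** bits  # number of distinct digital output codes

    def transduce(self, irradiance):
        # Clamp to the sensor's physical measurement range...
        clamped = max(0.0, min(irradiance, self.max_irradiance))
        # ...then quantize: continuous energy in, discrete signal out.
        return int(clamped / self.max_irradiance * (self.levels - 1))

sensor = LightSensor()
signal = sensor.transduce(500.0)  # mid-range irradiance -> mid-range code
```

A biological sensor does the analogous conversion into neural impulses rather than integer codes, but the structure is the same: a bounded physical range in, a signal out.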
So, a sensor is a kind of transducer. One thing to pay a lot of attention to, which will be very important for us with these various sensors once we pick one, and which will help with a lot of potential confusion later, is that each sensor moves through space, or changes in some way more generally. These changes are controlled by the brain and the central nervous system together. Another important thing to pay attention to, which is also important in robotics (I cannot help emphasizing this), is the degrees of freedom of each sensor or sense. You can abbreviate it DOF; people even pronounce this, calling them "doffs." How many DOFs does that sensor have? How many degrees of freedom? This is a very important question. Each sensor (I will squeeze this down here) has a space of configurations, a configuration space: in other words, the set of all ways to transform, move, or reconfigure that sensor. It is a very important thing to take into account, and our bodies are changing the configurations of our sensors all the time without telling us, without our being consciously aware of it. So this is something we have to train ourselves to pay a lot of attention to. If you do that, then you will understand these things better. So, let me give some examples. Let us think about your left ear; I just want to pick one ear right now to make it clear. So, pick your left ear. If we look at the position part: take my left ear. I can move it side to side, forward and backward, and down and up. That is 3 degrees of freedom for position, so I will say there are 3 DOFs from that.
However, I can also rotate my ear in various directions, like this, so there are 3 degrees of freedom from rotation. The total is 6 DOFs: I have 6 degrees of freedom because, let us say, it is a rigid body moving through space. So, as I am walking around here today, my left ear is traveling through a 6-dimensional space of configurations. Does everyone agree with that? It seems fine. Some animals can bend the direction their ears are pointing; that would be more degrees of freedom. We do not have that in our ears, but our eyes do, which is very interesting. For another example, just to give equal weight to both sides, we will pick the right eye; clearly it does not make much difference. This one is more complicated. We clearly have the 6 DOFs that correspond to head movement, or head and body movement, actually: your whole body can move, which causes your head to move around, which causes your eye to appear somewhere different. However, I can hold my head fixed and then start rotating my right eye around. I cannot do it by itself; I am rotating my left eye as well. How many degrees of freedom do I have with this eye as I rotate it around: 2 or 3? Roughly 2, let us say. You can look up and down and side to side. There is a little bit of torsioning or twisting that happens, but it is more or less 2-dimensional. I talked to some neuroscientists about this recently, and there is a little bit of a 3rd dimension, but I think 2 is just fine. So, there is rotation of the eye; maybe that is 2 DOFs. One more interesting thing happens, especially if you are younger than about 48 years old: your eye has an additional ability to refocus. When you bring things very close, your eye muscles are changing your lens so that you can look at things very closely.
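The 6-DOF rigid-body configuration can be sketched in a few lines of pure Python; the function name and the yaw-pitch-roll angle convention are my own choices for illustration. The point is simply that six numbers, 3 for position and 3 for rotation, fully determine where a rigid body (like your ear) is in space, via a 4x4 homogeneous transform.

```python
import math

def pose_matrix(x, y, z, yaw, pitch, roll):
    """Build a 4x4 homogeneous transform from the 6 DOFs of a
    rigid body: 3 for position, 3 for rotation (angles in radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Rotation: R = Rz(yaw) * Ry(pitch) * Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    # Append the translation column and the homogeneous bottom row.
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]

# With all six parameters zero, the body sits in its reference configuration:
identity = pose_matrix(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
```

As you walk around, your left ear traces a path through this 6-dimensional space of (x, y, z, yaw, pitch, roll) values.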
So, if children show you something, they can put it right up to your face, but you cannot focus on it anymore if you are over 40 years old, probably, or especially if you are over 50. We lose the ability to do that. However, it is one more important thing: the eye refocuses, and scientists refer to this as accommodation. In engineering we would call it focus, but it is good to know the word for it: the eye can accommodate. So, if I put all of this together, I end up with about 9 DOFs: if I consider focus to be a single parameter, I get 6 plus 2 plus 1. So, I have all of these degrees of freedom, and very often we are barely aware of them. I want to give one abstract view, draw a picture here, and then I will finish up this part and we will take a break. The most important thing: think about the degrees of freedom of your senses all the time. If you are testing virtual reality systems or developing them in some way, this becomes very important, and it is easy to ignore because we are trained to ignore it; our brain is not causing us to pay attention to it. So, I want to give a kind of abstract view, a bit simple, maybe a bit not too serious. Take the human brain here, and we have this sensor, or sense, or as we may also call it, the sense organ. It is providing information to the brain through neural impulses, and the brain in some way is controlling the configuration. Some of the control may actually bypass the brain and be very low level; an example of that is the vestibulo-ocular reflex, which I will cover in the next part. But let us just say, generically, that the brain is doing the control, and the sensor is capturing information through transduction from the outside world. So, there is this world out here, and this is what I would call the normal situation, without doing any kind of virtual reality yet.
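The DOF tally for the right eye from the discussion above can be written out as a tiny sanity check; the labels are just my own names for the three mechanisms the lecture identifies.

```python
# Tallying the lecture's DOF count for the right eye:
eye_dofs = {
    "head and body pose": 6,  # rigid-body motion carrying the eye through space
    "eye rotation": 2,        # up/down and side to side (ignoring slight torsion)
    "accommodation": 1,       # lens refocusing, treated as one parameter
}

total = sum(eye_dofs.values())  # 6 + 2 + 1 = 9
```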
And so the question now is: what can I put in the physical world to try to fool the brain? Well, I go over here to the sensor and look at the information coming in, or the stimuli coming in, and I want to replace this with something that has been engineered. So, let us try that. The world is still there, because you are not going to leave the world when you put on a virtual reality headset. Then what I do is make something called a display, whatever kind of display is appropriate for the sensor that I picked; it could be touch, it could be vision. I want this display to provide the stimuli to the sensor, and then I have to figure out what to put on the display. So, I had better make some kind of box here, which I will call an alternate world generator. I do not know if I want to call it a virtual world generator, because it may be maintaining a representation of an alternate world that is not virtual: it could be a real world somewhere far away, at some other location; it could be recorded, delayed by some amount; or it could be completely virtual. But some kind of alternate world generator is going to have to be constructed, and that is one of the things we will get into. Through the process of what is called rendering (tiny letters here), we take the information constructed by the alternate world generator, and it is rendered to a display that provides the sensory stimulation, so that the brain is fooled. So hopefully, by the time you are done (I think I will just draw it in a different color here), the brain does not see any of this machinery; it just imagines that there is an alternate world, and that is all there is to it.
So, the brain does not understand any of this; it is something that we on the outside, the scientists, can see, if you like. But if you are putting on the headset, you become the laboratory rat, so you cannot really see the yellow part. From your brain's perspective, it is just the alternate world; that is where you feel present, if everything has been done correctly. Questions about that? Sorry, the sensor? No, I mean any human body sense that you like. So it could be, that is right. And as I mentioned, in the next part (we will take a break here in a bit), I want you to think about a display as being an engineered device that presents stimulation to your senses. A simple example: an audio display is called a speaker. That is a good question. Normally we just use "display" for video; we call these things on the laptops here displays. This is a display, but I want a speaker to be a display too. I want you to think very generically like that across all of the senses, and constantly make comparisons across the different senses: ask why something was important for this sense and not for that other sense. These are good kinds of questions to ask, and that is the motivation for calling them all displays. Trying to get to the fundamentals here, trying to really think about what is common across all of these. Very good question; any others? [Student asks about using more than one sense.] Yeah, that is a good question.
So, I have just drawn it for one sensor, but of course I can make an array of sensors here and then an array of displays with rendering; I am going to draw this picture next time, with an alternate world generator feeding all of those. Then we can ask: how many senses are enough? How many do you need to take over? I will just say: whatever is sufficient for your task, whatever is adequate for the task. What is your goal? Do you want to feel fully immersed, like you are hanging out with your friends, socializing and having a good time? If that is enough, fine. Do you need smell in that case? I do not know; it just depends. So, what senses do we need? Vision is the most powerful one by far. Audio is probably the next most powerful, and haptics maybe next; I do not know exactly how we want to order these, but I do not think anyone is going to disagree with vision being the most powerful sense that we have, and it requires the most neural processing out of any of our senses. That is what makes the current times very exciting: we have had audio immersion for quite a while now, but visual immersion, and then the two of them together, is very powerful. Having haptic or touch feedback as well would be even more powerful, and necessary in some tasks, depending on what the task is. So, we always have to think about what the task is and what is adequate for that task. If I want to train a doctor to do some kind of stitching, let us say on skin, and I provide no physical force feedback whatsoever in the simulation, it is going to be very hard for them to get it right; they will just have to do it in the physical world only. So in that case, you need to provide force feedback. If I am going to socialize with my friends, I probably do not need that. It really depends on the task. All right, other questions?
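The "adequate for the task" idea can be captured in a few lines; the task names and the sets of required senses below are purely my own examples based on the two tasks mentioned above, not a definitive taxonomy.

```python
# Which senses a hypothetical system deems adequate for a given task
# (illustrative only; the sets are my own examples from the lecture's cases).
REQUIRED_SENSES = {
    "socializing": {"vision", "audio"},
    "surgical_suturing": {"vision", "audio", "haptic"},  # needs force feedback
}

def displays_adequate(provided_senses, task):
    """True if the displays we provide cover every sense the task needs."""
    return REQUIRED_SENSES[task] <= set(provided_senses)

displays_adequate({"vision", "audio"}, "socializing")        # adequate
displays_adequate({"vision", "audio"}, "surgical_suturing")  # missing haptics
```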