In this video, we will talk about the eye gaze detector, another sensor that detects a person's eye gaze; such devices are called eye trackers. So how do eye trackers work, and how do they detect eye gaze? Please check this blog for the details; let me show it. The blog is again from iMotions. As I mentioned, iMotions is a company that integrates different sensors and has created a core engine that combines the data from those sensors into one single system so that you can do your analysis better. Eye gaze is estimated by detecting your eye pupil together with the reflection of a light source on the eye. That reflection gives the direction in which you are looking, and combined with your distance from the screen, if you are watching a laptop, it tells which location on the screen you are looking at. That is how eye trackers work, so do read this blog; it is interesting. A typical portable eye tracker is a small device that can be attached to your laptop screen, after which you can start collecting data. Tobii is the company that is the leader in eye-tracking devices. There are also more and more other eye trackers, such as wearable eye trackers built into glasses; I will show the latest ones from Tobii. So the eye tracker collects the participant's eye gaze data. These are the products by Tobii, the pioneer in this field. The latest one is Tobii Pro Glasses: there is no need for an extra device, and the need to keep your attention fixed on a screen is gone. With this new technology they are able to detect eye gaze even while you are interacting with an object, for example looking at some product.
And this is the Tobii Pro, the research-level eye tracker. There is also something called the Tobii Pro Nano, a small device that can also be used for research, and they have more devices besides. Tobii also had one device called the Tobii Eye Tracker 4C, developed especially for gaming. It is very low cost in the sense that, for the level of technology it uses, they do not charge much: the 4C is around $100, and in India, after import duties and everything, you can get it on Amazon for about 13,000 to 15,000 rupees. For research that would be very convenient: you could buy, say, 10 eye trackers and collect data. But the purpose of the 4C is gaming, so you cannot use it for research purposes; the license declaration says that you cannot capture and analyze data from the 4C. So people do not use the 4C for research. If you want to use it, talk to the Tobii company and get the license permission, for which you again have to pay a large amount. For research purposes they have devices like the Tobii Pro Fusion or the Tobii Pro X3-120, and the Tobii Pro Nano, a 60 Hz eye tracker, is really good and not that costly; it might cost 2 to 3 lakh rupees, so it is feasible for people to buy one, collect data, and build a research proposal out of it. The Tobii eye trackers we are talking about collect data at a particular frequency; we will see that shortly. There are also eye trackers that are not based on an external device but just on the webcam. Among the commercial ones, Tobii offers a webcam-based detector called Sticky that anybody can use, and another one is called RealEye. RealEye is a very interesting one; let us see what it is.
So RealEye is screen-based webcam eye tracking; you can give it a try for free. Just launch the demo and it actually detects where the student is looking, based on the webcam image. After that it gives you an analysis of the data, so it is not just detection. It is not very costly compared to Tobii for research purposes. There is one more, from Brown University, called WebGazer; it is free and open source, so use it. The challenge when you test all of them is that they need proper lighting while the student is interacting with the computer or laptop: the light should fall on the face so that the pupil direction can be detected and mapped to where on the screen the student is looking. And it is very sensitive; it is not as accurate as a Tobii or other external screen-mounted eye tracker, which can tell precisely which word on the screen you are looking at. One reason is that the webcam is on top: if you are looking at a laptop, the webcam is above the screen, and if I am reading the bottom of the page my eyelids are nearly closed while reading, so the webcam may not be able to detect the gaze. So go ahead and try realeye.io or the demo and look at the bottom of the page; you will see that the accuracy is not so good there. But if you are conducting research where the areas of interest are large, these webcam-based eye trackers will be beneficial. The reason is that buying a research-grade eye tracker from Tobii and conducting a study in a real classroom with 30 students is not possible even if you have a lot of money. When I say AOI (area of interest): suppose this is my screen showing computer-based learning content, and maybe this is the menu, there is a big picture, and there is some text.
If in your research the areas of interest are very big, such as when you want to capture how much time a student spent on the menu, how much time the student looked at the picture, and how much time the student looked at the text, and by using this information you want to understand the student's learning, then this webcam-based detector might work for you, so check it out. I would suggest trying RealEye or WebGazer; they work pretty much the same, and the question is just how to integrate them and collect the data. The feature extraction and analysis are not really tough; it is easy, and I will show how to do it. There are two key features in eye gaze detection: one is called fixation and the other is called saccade. A fixation is when your eye is fixating on a particular point. How long you must fixate, and what counts as the point, whether a single pixel or some bounding box, depends on your research question and on how you decide the duration threshold and the box. For example, if my eyes stay anywhere within this box for more than, say, 50 milliseconds or 100 milliseconds, I call it a fixation. When you are reading something you are actually fixating from point to point: you fixate here, then here, then here, then maybe here. The movement between fixations is called a saccade; the transition between fixations is a saccade. Those are the two key features in eye gaze detection. From the eye tracker you get raw data, and the raw data looks like this: just a timestamp and the x, y positions.
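To make these two features concrete, here is a minimal sketch of dispersion-based fixation detection (the I-DT idea): a run of samples that stays within a small spatial window for long enough is a fixation, and everything between two fixations is a saccade. It assumes raw samples arrive as (timestamp_ms, x, y) tuples; the 30-pixel and 100 ms thresholds are illustrative choices, not values from the lecture.

```python
# Minimal sketch of dispersion-based fixation detection (I-DT idea).
# Assumes raw gaze samples as (timestamp_ms, x, y) tuples; thresholds
# below are illustrative, not values from the lecture.

def detect_fixations(samples, max_dispersion=30, min_duration_ms=100):
    """Group consecutive samples into fixations.

    A fixation is a run of samples whose x/y spread stays within
    max_dispersion pixels for at least min_duration_ms milliseconds.
    The jumps between fixations are the saccades.
    """
    fixations = []
    window = []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # Dispersion = spread of the current window in both axes
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # The eye moved too far: close the previous run if long enough
            done, window = window[:-1], window[-1:]
            if done and done[-1][0] - done[0][0] >= min_duration_ms:
                cx = sum(p[1] for p in done) / len(done)
                cy = sum(p[2] for p in done) / len(done)
                fixations.append({"start": done[0][0], "end": done[-1][0],
                                  "x": cx, "y": cy})
    # Flush the final run
    if window and window[-1][0] - window[0][0] >= min_duration_ms:
        cx = sum(p[1] for p in window) / len(window)
        cy = sum(p[2] for p in window) / len(window)
        fixations.append({"start": window[0][0], "end": window[-1][0],
                          "x": cx, "y": cy})
    return fixations
```

For example, feeding in 60 Hz samples that dwell at one point for 200 ms, jump, and dwell at another point yields two fixations, one centered on each dwell.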
What are the x, y positions? For example, on your screen, consider this corner as (0, 0); if the screen size is 900 x 600 or so, then the opposite corner will be (900, 600), since x is 900. Or you can change where (0, 0) starts: some trackers take the center of the screen as (0, 0) and report coordinate values relative to it, and the eye tracker can convert the values to whichever convention you want. So the raw data looks like this: timestamp, x position, y position; timestamp, x position, y position; that is the exact raw data. If you want, the orientation of your eye movement can also be added, because there is a point of reference from where you are looking. Another important thing is the sample rate; I mentioned on an earlier slide that there is a sample rate of 60 Hz or 90 Hz. The Tobii Pro Nano, for example, says 60 Hz. What does 60 Hz mean? The tracker collects 60 samples per second, which means roughly every 16.7 milliseconds you get an x, y position. Eye gaze tracking can be very accurate: in reading research the recommended sample rate is very high, on the order of 1000 Hz or more, because you want to know which word and even which letters the reader is looking at. But for most purposes I would say 60, 90, or 120 Hz is enough; you do not need to go for such a high-end sample rate. If you use a webcam-based eye tracker, the sample rate will be around 25 Hz, which is not much, so you have to keep that limitation in mind as well. Now let me show what fixations and saccades look like; this figure is from Wikimedia, under a Creative Commons license.
So this person first fixated at this location, then fixated on the face, then here, then here, and so on; sorry, the first fixation is the one marked 0, not this one. This is how the fixation sequence happened: the numbers 1, 2, 3, 4 indicate the order, each circle is a fixation, and each connecting line is a saccade. These eye trackers also give another kind of information, called a heat map. A heat map, in the usual sense, shows where you spent more time fixating: after looking at this Wikimedia page for 5 minutes, it is not about the sequence or the saccades; instead it shows where you focused more. This is the most focused area, this area is also focused, and these pale regions were hardly looked at at all. This is very helpful if you make advertisements or market some product: where are the participants, the customers, looking? That is why it is very useful in marketing. But we can also use eye-tracking data for learning analytics, and that is the whole idea here. So I said we can use eye trackers for learning analytics; how can we use them? Write down your answers, and after that resume the video to continue. You can detect the learner's cognition, whether they understood things or not, their reading skills, focused attention, engagement, performance; a lot of things can be detected from eye-tracking data. I will show one example, from the paper "Predicting learning by analyzing eye-gaze data of reading behavior," an EDM 2018 paper. In this paper they used a system called Betty's Brain, an open-ended learning environment. What is Betty's Brain? Let me give a very brief overview.
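To illustrate the heat-map idea, fixations can be binned into a coarse grid, weighting each by its duration, so cells where the viewer dwelt longest get the highest values. This is a rough sketch (real tools smooth the result with a Gaussian kernel and render it as a color overlay); the fixation dict layout with x, y, start, end keys is an assumption carried over from the earlier fixation-detection sketch.

```python
# Sketch: approximate a gaze heat map by binning fixations into a grid,
# weighted by fixation duration. Real tools apply Gaussian smoothing;
# the grid here just shows where dwell time accumulates.
import numpy as np

def heatmap(fixations, width=900, height=600, bins=(9, 6)):
    xs = [f["x"] for f in fixations]
    ys = [f["y"] for f in fixations]
    # Weight each fixation by its duration so long dwells count more
    weights = [f["end"] - f["start"] for f in fixations]
    grid, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, width], [0, height]],
                                weights=weights)
    return grid  # grid[i][j] = total fixation time in that 100x100 cell
```

A short dwell in the top-left and a long dwell in the bottom-right then show up as a small value in one corner cell and a large value in the opposite one.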
If you are interested, go and check the Betty's Brain page from Vanderbilt University, from Professor Gautam Biswas's group; the whole Betty's Brain system is available on the web. In Betty's Brain, this is the agent, Betty. The student's role is to teach this agent, Betty, to solve a particular problem. Initially Betty's brain is blank; there is nothing in it. There is also a mentor, Mr. Davis, whom you can ask whenever you need help, but your goal is to teach Betty, so go and do that. In order to teach, you have to understand some science concepts, for example climate change. All the reading material is given here; the student has to read the content, can take notes, and after taking notes or reading, the goal is to go and create a concept map. This space is Betty's brain; initially it is empty, and then you have to create a concept map: this is a concept, and this is the causal relationship between two concepts, an increases or decreases link. If deforestation increases, vegetation will decrease. The correct and wrong marks you see are made by the student, not by the system: you can tick a link as correct, or mark it if you are unsure, based on taking the quiz and looking at the answers. After you create the map, you can ask Betty to take an exam, on multiple sections, multiple subsections, or the whole chapter. When Betty takes the exam, the correct answers are marked here, and if you select a question, the particular path in the map that was used to answer it is shown here. So you know that these two paths are correct, and then you can mark them in green; that is what the student did here.
So the idea is that the student is teaching Betty to do something; when it does not work, it is not the student who is failing but Betty, and that framing actually changes the emotional experience. So this is Betty's Brain. In the system we defined an area of interest on each piece of text; I am just going to put the same title here because it would otherwise be hidden. Suppose a student is reading a particular line: "when this heat is reflected back to the earth, it is reabsorbed by the earth's surface and becomes absorbed heat energy." There are two concepts involved, heat reflected back to earth and absorbed heat energy, and there is a relationship: if more heat is reflected back to earth, there will be more absorbed heat energy. That is how the two concepts relate. Consider that I am creating the relationship between these two, heat reflected to earth and absorbed heat energy: if one increases, absorbed heat energy will also increase; this is the causal relationship between the two concepts. What I did was take the science book and mark each and every sentence in this whole chapter, some 7 or 8 pages. I mapped everything to particular concepts and particular causal relationships; there are only 25 of them. Then I created AOIs; my AOI is not very small but not very big either, something like this. So whenever a student looks at this page and at this particular AOI, and then converts that knowledge into the concept map correctly, I would say the student understood what he was reading. If the student spent time here but is still not able to convert it into the correct causal relationship, then the student did not understand.
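The AOI check described here amounts to a simple rectangle hit-test over the detected fixations, accumulating dwell time per AOI. This is a minimal sketch; the AOI names and coordinates below are made up for illustration and are not taken from the actual study, and fixations are assumed to be dicts with x, y, start, end keys as in the earlier sketch.

```python
# Sketch: sum fixation time per area of interest (AOI).
# AOI names and coordinates are hypothetical, for illustration only.

AOIS = {
    "sentence_3": (100, 240, 700, 40),   # x, y, width, height in pixels
    "sentence_4": (100, 280, 700, 40),
}

def dwell_time_per_aoi(fixations, aois=AOIS):
    dwell = {name: 0 for name in aois}
    for f in fixations:
        for name, (ax, ay, aw, ah) in aois.items():
            # Rectangle hit-test on the fixation centre
            if ax <= f["x"] < ax + aw and ay <= f["y"] < ay + ah:
                dwell[name] += f["end"] - f["start"]
    return dwell
```

Fixations that land outside every AOI simply contribute nothing, which is the usual convention when only the marked sentences matter for the analysis.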
Using this data we created a simple classifier, with features from the eye gaze data such as fixation count, fixation count on the AOI, average fixation duration on that particular AOI, saccade angle, saccade amplitude, and similar extended features, all derived from fixations and saccades only. Using these eye-gaze features from the eye tracker on the exact AOI, together with the corresponding concept-map edit, the relation made between two concepts, we labeled the data and tried to predict the student's ability to make the concept correct just from the eye-gaze information. So the eye-gaze data during read actions and the performance on the subsequent edit action were used to label the data to train and validate classifiers. The idea is simple: we label the data, use ten-fold cross-validation to train the classifier, and then unlabeled data is given to the learned classifier, which classifies the pages read. Then we used linear regression to model the student's ability, whether they were able to create the concept-map link correctly or not, just based on the eye gaze data. We got a good accuracy of around 80%, and the Cohen's kappa was really good. It indicates that just by looking at a student's fixations and saccades on a particular AOI, we can tell whether the student understood the concept or not with about 80% accuracy. That is what this whole exercise is about. So what I am saying is: with an eye tracker, whether webcam-based or any other, you can actually predict whether the student is learning. Is the student really learning, or simply reading? Is the reading being converted into knowledge or not? This kind of information can be obtained and predicted easily using eye trackers. So in this video we saw what eye gaze detection is, why eye trackers are used in learning analytics, and I also gave one example.
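As a sketch of the feature-extraction step, the features named above can be computed directly from a fixation sequence on one AOI. The dict layout for fixations is an assumption from the earlier sketches, and the actual study's feature definitions may differ; a classifier would then be trained on rows like this, labeled by whether the subsequent concept-map edit was correct.

```python
import math

# Sketch: turn a fixation sequence on one AOI into a feature vector of
# the kind mentioned in the lecture (fixation count, average fixation
# duration, saccade amplitude). Feature definitions are illustrative.

def gaze_features(fixations):
    durations = [f["end"] - f["start"] for f in fixations]
    # Saccade amplitude: Euclidean distance between consecutive fixations
    amps = [math.hypot(b["x"] - a["x"], b["y"] - a["y"])
            for a, b in zip(fixations, fixations[1:])]
    return {
        "fixation_count": len(fixations),
        "mean_fixation_ms": sum(durations) / len(durations) if durations else 0.0,
        "mean_saccade_amp": sum(amps) / len(amps) if amps else 0.0,
    }
```

For instance, two fixations of 100 ms and 200 ms whose centers are 50 pixels apart give a count of 2, a mean duration of 150 ms, and a mean saccade amplitude of 50 pixels.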
In this week, as I tried to motivate you, we saw that there can be data beyond log data and human observers: we can have data about the student's facial expressions, data collected from different sensors like an eye gaze detector or GSR, and other data, and we can combine this data to build a complete model of the learner, a holistic approach. The challenge in multimodal learning analytics is combining the data and providing useful information to students; that is still an open challenge, and a lot of people are working on it. I hope I have motivated you about the field called multimodal learning analytics. As I mentioned, this is not a one-week topic; it takes most of a semester course to teach multimodal learning analytics and play with the data. So thank you. Next week is the final week; we will not take up any new topics, but will talk about the next steps in learning analytics. Thank you.