So far, you have seen how to collect data from the classroom environment, the online environment, and the TEL environment. We discussed how to collect this data: in a classroom environment, you can capture students' behavior through human observers, or through the students' interactions with a system, such as a module or their test performance. In an online environment, you can track all of the users' behavior: all the clickstream data is collected, and we can write a script to convert that raw data into events or actions. A TEL environment is an environment you developed yourself, so you have full control over what information you want to collect; based on what information you are interested in, you can write your own logging script. Across these three types of data collection, all the data is collected from the learner's behavior in the environment, that is, from how the student interacts with the system. We are not directly observing the student's behavior; instead, we are trying to understand the student's behavior through the system. Except in the classroom setting, where I mentioned that human observers can measure students' motivation and affective states, all other information is based on the student's behavior in the environment.

In today's class, we discuss what data collection is, what kinds of data you can collect in different environments, and what data we need. Let us begin with an activity. Suppose you are a learning analytics researcher using metal to run a study. Apart from the log data (we discussed the log data of metal in the last class), what other sources of data are important, and how can they be collected? What sources of data can be collected while the student is interacting with metal? You can pause this video and write down your answers; after completing the task, please play the video to continue.

Data collection using sensors.
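To make the "script to convert raw data into events or actions" concrete, here is a minimal sketch in Python. The log format (timestamp, user ID, URL) and the action-mapping rules are assumptions for illustration only, not the format of any particular environment; you would adapt the parsing to your own system's raw logs.

```python
# Sketch: convert raw clickstream log lines into (time, user, action) events.
# The comma-separated log format and the URL-to-action mapping below are
# hypothetical; adapt both to your own environment's raw logs.
import csv
from io import StringIO

# A few hypothetical raw log lines, as a server might dump them.
RAW_LOG = """\
2021-03-01T10:00:03,stu01,/module1/video?play=1
2021-03-01T10:04:11,stu01,/module1/quiz?submit=1
2021-03-01T10:05:27,stu02,/module1/video?pause=1
"""

def url_to_action(url: str) -> str:
    """Map a raw URL into a coarse action label (assumed mapping)."""
    if "video" in url:
        return "watch_video"
    if "quiz" in url:
        return "attempt_quiz"
    return "other"

def parse_events(raw: str) -> list:
    """Turn raw comma-separated log lines into event dictionaries."""
    events = []
    for ts, user, url in csv.reader(StringIO(raw)):
        events.append({"time": ts, "user": user, "action": url_to_action(url)})
    return events

for e in parse_events(RAW_LOG):
    print(e["time"], e["user"], e["action"])
```

The key design point is the separation between raw records and coarse actions: analyses are usually run on labels like "attempt_quiz", not on raw URLs.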
If you are a learning analytics researcher using a TEL environment, we asked what kind of data you would like to collect apart from the log data. You can collect data such as eye gaze information, facial expressions, brain waves, skin conductance, and gesture or posture.

How can it be collected? For eye gaze data, we can use eye trackers. There are portable eye trackers available that can be attached to a laptop to get accurate eye gaze, or you can simply use the laptop's web camera to collect the eye gaze information. For facial expressions, you can record the student's face using the laptop's web camera or an external webcam. For brain waves and skin conductance, you can use portable devices to collect these signals; however, this may not be cost-effective, and it does not scale to a large class. For gesture and posture, you can have an external video camera record the student and manually code the recording after the study, or you can use pressure-sensitive chairs and pressure-sensitive mice and keyboards.

Here is sample eye gaze and GSR data collected in a TEL environment. The student is looking at different parts of the environment, and you can see the eye gaze overlaid on the video; below it is the student's GSR data. You can collect data from multiple channels. However, processing data from multiple channels is challenging, because we need to align the data from each channel based on timestamps, and each channel has a different granularity. For example, eye gaze may be sampled at 60 Hz; there are eye trackers that can record a student's eye gaze at up to 1500 Hz. What does this mean? If eye gaze is collected at 60 Hz, the eye tracker is capturing 60 samples per second, which is roughly one sample every 16.7 milliseconds; you will have 60 data points per second.
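The sampling-rate arithmetic above can be sketched in a few lines; this is just the interval computation from the lecture, with the rates (60 Hz, 25 fps, 1500 Hz) taken from the discussion:

```python
# Sketch: inter-sample interval for the sampling rates discussed above.
def interval_ms(rate_hz: float) -> float:
    """Milliseconds between consecutive samples at the given rate."""
    return 1000.0 / rate_hz

print(interval_ms(60))    # eye tracker at 60 Hz  -> ~16.7 ms per sample
print(interval_ms(25))    # video at 25 fps       -> 40.0 ms per frame
print(interval_ms(1500))  # fast tracker, 1500 Hz -> ~0.67 ms per sample
```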
For facial expressions, the rate is based on the number of frames per second in the video. If the video is recorded at 25 frames per second, I will have 25 pictures per second, and each picture can be analyzed for facial expressions. So the facial expression data varies every 40 milliseconds, while the eye gaze data varies roughly every 16.7 milliseconds. Aligning data at these two different granularities is challenging. Analyzing such multi-channel data for learning analytics is beyond the scope of this course.
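To illustrate the alignment problem just described, here is a minimal sketch that pairs each 25 fps video frame with the nearest 60 Hz gaze sample by timestamp. The setup (both channels starting at t = 0, nearest-neighbor matching) is an assumption for illustration; real pipelines must also handle clock offsets and dropped samples.

```python
# Sketch: align two channels with different granularities by timestamp.
# Assumed setup: gaze at 60 Hz and video frames at 25 fps, both from t = 0;
# for each video frame we pick the nearest gaze sample (nearest-neighbor).
import bisect

GAZE_HZ, VIDEO_FPS = 60, 25

# Timestamps in milliseconds for one second of data per channel.
gaze_ts = [i * 1000.0 / GAZE_HZ for i in range(GAZE_HZ)]       # every ~16.7 ms
frame_ts = [i * 1000.0 / VIDEO_FPS for i in range(VIDEO_FPS)]  # every 40 ms

def nearest_gaze(t: float) -> float:
    """Return the gaze timestamp closest to time t (binary search)."""
    i = bisect.bisect_left(gaze_ts, t)
    candidates = gaze_ts[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda g: abs(g - t))

aligned = [(t, nearest_gaze(t)) for t in frame_ts]
# e.g. the frame at 40 ms pairs with the gaze sample at ~33.3 ms, which is
# closer (6.7 ms away) than the next sample at 50 ms (10 ms away).
```

Nearest-neighbor matching is the simplest choice; interpolation or windowed averaging are common alternatives when the rates differ by a large factor.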