Welcome to the Moodle Learning Analytics Workshop overview for developers.

Learning analytics are software algorithms used to predict or detect unknown aspects of the learning process, based on historical data and current behavior. There are four main categories of learning analytics. Most commercial solutions are descriptive only; those that are predictive or proactive make certain assumptions about learning that don't apply to everyone. The Moodle Learning Analytics API, implemented in version 3.4, provides an open system that can become the basis for a wide variety of models.

Models can contain indicators (the predictors), targets (the outcome we are trying to predict), insights (the predictions themselves), notifications (messages sent as a result of insights), and actions (offered to recipients of messages; these can become indicators in turn).

As we explore different parts of the system in this workshop series, we will focus on different design criteria and measurement methods. These criteria will be documented in use cases: descriptions of how the user of a system wants it to work. As we go through the series, we will look at each of these steps in turn.

This diagram shows a technical overview of the Moodle Learning Analytics API. Developers can extend this model or create new models using the provided API. We will focus on different aspects of the API in each workshop of this series.

Core classes to extend are found in the Moodle install directory under analytics/classes/local. These will be discussed in greater detail in later presentations. These are the classes included in the models shipped with core, and they provide good examples for further development. Core classes are also found with the Moodle components they relate to; this usually includes classes related to core activity modules, including indicators.
While the classes in core are worth looking at for examples, most external developers will be contributing code in the form of a local plugin, as shown here. In addition to the code for the model components, the files db/install.php, lang/en/local_pluginname.php (with your plugin name), README.md, and version.php are required. Among other tasks, the install.php file registers the analytics model with Moodle.

We'll look briefly at the files in the skeleton learning analytics plugin. The db/install.php file normally executes after any database schema changes defined in db/install.xml have been made by the plugin; normally, a learning analytics model won't need to change the database schema. The install.php file registers the model with Moodle. This example is taken from the test analytics plugin provided on GitHub. Change the function name to suit your plugin name and change the other values at the indicated locations as needed. For your first repository, you do not have to implement a new target; you can reuse an existing one from core. Note that you may also want to include uninstall.php to unregister a model and upgrade.php to make changes to a model. In either case, check whether your model is in use before making changes.

All strings should be implemented using a default English language file. At minimum, define your plugin name. As you add components, name them here and provide help strings. This example is taken from the test analytics plugin provided on GitHub. The String API documentation provides more information about these string files.

Provide a standard README.md (Markdown format) or README.txt file with the name of your plugin and some basic information. If there are any dependencies or special install instructions, list them here. Ideally, this file should act as an offline version of all the information that is available in the Plugins directory. This example is adapted from the test analytics plugin provided on GitHub.
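As a sketch of that registration step, db/install.php for a hypothetical plugin named local_myanalytics might look like the following. The target and indicator class names here are taken from Moodle 3.4 core as an illustration of reusing existing components; substitute your own classes as needed.

```php
<?php
// Hypothetical db/install.php for local_myanalytics.
// Registers an analytics model when the plugin is installed.
defined('MOODLE_INTERNAL') || die();

function xmldb_local_myanalytics_install() {
    // Reuse an existing core target instead of implementing a new one.
    $target = \core_analytics\manager::get_target(
        '\core\analytics\target\course_dropout');

    // Turn each indicator class name into an indicator instance.
    $indicators = [];
    foreach (['\core\analytics\indicator\read_actions'] as $classname) {
        $indicators[] = \core_analytics\manager::get_indicator($classname);
    }

    // Create (register) the model with Moodle.
    \core_analytics\model::create($target, $indicators);
}
```

A matching uninstall.php would look up the model and delete it, again checking first whether it is in use.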
You may also want to include a CHANGES.md, CHANGES.txt, or CHANGES.html file with later versions of your plugin to pre-fill release notes for the new version.

Provide a standard version.php file. In addition to the usual GNU GPL disclaimer, set the version of your plugin based on the date you are posting, define the minimum required version of Moodle, and define the plugin name as $plugin->component. Learning analytics models will require at least Moodle version 3.4, defined as 2017111300. Be sure to increment the plugin version each time you push changes to your repository. You can also include $plugin->maturity and $plugin->release. The version.php developer documentation page provides a detailed description of the file contents. This example is adapted from the test analytics plugin provided on GitHub.

Beyond the basic plugin skeleton, these are the key components in a Moodle learning analytics model. Each of these is the topic of a Moodle learning analytics workshop. For now, let's look at each component quickly to see how it is defined.

The main component of a model is the target. Target definitions can be fairly simple; most of the complexity is in the definitions of the valid analysable (the set) and the samples (the elements), and in the calculation of the target value for each sample. The text to be displayed is defined in a language string file and is localizable.

Indicator definitions can also be fairly simple; most of the complexity is in the calculation of the indicator value for each sample. As we will discuss in detail in the workshop on indicators, this is where scaling and centering need to be implemented.

Time-splitting definitions are usually simple extensions of the provided core classes, for example \core_analytics\local\time_splitting\accumulative_parts, which is itself an extension of \core_analytics\local\time_splitting\base.
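A minimal version.php along those lines might read as follows; the plugin name and version date are hypothetical placeholders.

```php
<?php
// Hypothetical version.php for local_myanalytics.
defined('MOODLE_INTERNAL') || die();

$plugin->component = 'local_myanalytics'; // Full frankenstyle plugin name.
$plugin->version   = 2018060100;          // Date-based version (YYYYMMDDXX); increment on every push.
$plugin->requires  = 2017111300;          // Moodle 3.4, the first release with the Analytics API.
$plugin->maturity  = MATURITY_ALPHA;      // Optional maturity declaration.
$plugin->release   = 'v0.1.0';            // Optional human-readable release name.
```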
To make a custom definition not based on existing classes, the primary required function is define_ranges, which returns an indexed array of arrays specifying start and end time values.

Analyzer definitions are one of the more complicated parts of the Moodle Learning Analytics API. Analyzers are deliberately made general, so models can reuse analyzers and constrain sets within the target code.

Insights, notifications, and actions are implemented as part of the target of a model. Currently, notifications are sent to those with the capability moodle/analytics:listinsights in the current context. This notification consists of a standard language string (the insight info message) plus the name of the model, so no specific code is required in the target to implement notifications. In addition to the required elements for targets discussed previously, implement prediction_actions to construct a list of URL links to actions the recipients can take.

In this workshop series, we will also focus on the ethics of learning analytics systems, particularly aspects covered by the European Union's GDPR (General Data Protection Regulation), such as the right to be forgotten, and FERPA (the Family Educational Rights and Privacy Act) in the United States. Addressing these issues is not simple. Deleting student data from the server can impact predictions for other students, but the student is still at risk of exposure if the data is maintained, even if the student's primary identification information is removed. No site security is perfect: the longer raw data, insights, etc. are held, the greater the chance of exposure. We need to help administrators balance utility against risk. For example, we can save model data without exposing individuals.

Most importantly, we have to be mindful about the models we design and implement. It is possible for models to have negative impact by distracting or discouraging participants in the learning process.
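To illustrate define_ranges, here is a sketch of a custom time-splitting class (the class name is hypothetical, not part of core) that splits the analysable's timeframe into two halves. Other required methods, such as the localized name, are omitted for brevity.

```php
// Hypothetical custom time-splitting class for local_myanalytics.
// Splits the analysable's timeframe into two equal ranges.
class two_halves extends \core_analytics\local\time_splitting\base {

    protected function define_ranges() {
        $start = $this->analysable->get_start();
        $end = $this->analysable->get_end();
        $middle = $start + intdiv($end - $start, 2);

        // An indexed array of arrays with start and end time values;
        // 'time' marks when each range's predictions are calculated.
        return [
            ['start' => $start, 'end' => $middle, 'time' => $middle],
            ['start' => $middle, 'end' => $end,   'time' => $end],
        ];
    }
}
```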
Working within an institutional code of practice can help avoid this. Models which incorporate data based on race, gender, or socioeconomic status may easily give biased results due to historical biases in the data.

Goodhart's law describes the tendency for a proxy measurement to become more important than the value it represents. This is a critical risk in the selection of targets, but can also be a problem if too much emphasis is placed on indicators in messaging. Easy-to-capture measures are often misinterpreted and over-weighted, simply because they exist. It is tempting to think, "well, this data is better than nothing," but that is emphatically not true. Some kinds of data can be misleading and harmful, by focusing attention ineffectively or by discouraging learners. Remember, all indicators included in a model will be displayed as part of insight notifications.

Involving teachers, students, and other members of the educational community is crucial. Participants at all levels are best able to correct errors in data collection and modeling. Genuine involvement in the process also helps to improve trust in the system.

Finally, learning analytics models require continuous review and improvement. Model fit is likely to change over time as circumstances change at the institution.

We will come back to this quote many times. Our purpose in creating these systems is not simply to make accurate predictions or to implement clever algorithms; it is to improve learning success, however we have defined that. All of our design and implementation efforts need to be conducted with this goal in mind.

Now it's your turn to create a simple plugin skeleton. You can work from one of the published examples or start from scratch. You must include the required files and directory structure shown here.
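Pulling together the required files discussed earlier, a minimal skeleton for a hypothetical local_myanalytics plugin would look like:

```
local/myanalytics/
├── db/
│   └── install.php               (registers the model)
├── lang/
│   └── en/
│       └── local_myanalytics.php (language strings)
├── README.md
└── version.php
```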
To earn the beginning developer badge, post your skeleton plugin code to a public Git repository and provide the link in the Introduction to Learning Analytics workshop at https://moodle.org/analytics. Remember, you'll also need to review two other repositories to earn the badge. Thank you.