Welcome to the Moodle Learning Analytics Workshop overview for educators and researchers. There are many questions educators and educational researchers have that we hope learning analytics can help us answer. Beginning with Moodle 3.4, Moodle Learning Analytics includes a model that can help answer the first question. The others are aspirational, but well within the capabilities of this new tool. A new initiative could be a new textbook, a new online tool, a way of training teachers, a new student support center, or any other change in how the institution supports quality learning.

In the past, learning analytics systems have often attempted to analyze past activity in order to predict future activity in real time. With Moodle Learning Analytics we are more ambitious. We believe a full learning analytics solution will help us not only to predict events, but to change them for the better.

Learning analytics are software algorithms used to predict or detect unknown aspects of the learning process, based on historical data and current behavior. There are four main categories of learning analytics. Most commercial solutions are descriptive only, and those that are predictive or proactive make certain assumptions about learning that don't apply to everyone. Moodle provides a variety of built-in reports based on log data, but they are primarily descriptive in nature. They tell participants what happened, but not why, and they don't predict outcomes or advise participants on how to improve them. Many third-party plugins for Moodle also provide descriptive analytics, and there are integrations with third-party off-site reporting solutions. Again, these primarily provide descriptive analytics that rely on human judgment to interpret reports and generate predictions and prescriptions.

To move beyond descriptive analytics, we need to be thoughtful about which targets we choose, and even which indicators we choose. Our learning analytics should be aligned with the curriculum theories of our organizations: we need to know the purpose of an educational program in order to design learning analytics that assist in achieving that purpose. A single educator or educational program usually combines two or more of these purposes. Most learning analytics systems at this time are focused on social efficiency curricula, though we see elements of all four of these purposes. We will come back to these principles many times as we examine different parts of learning analytics systems. For more information about these curriculum priorities, see, for example, the book Curriculum Theory: Conflicting Visions and Enduring Concerns by Michael Schiro.

As we go through this course, we will look at each of these steps in turn: deciding what outcome we want to predict and how we will measure it, identifying clues we think might help us to predict that outcome, and deciding who should be notified and what options to offer if the outcome is predicted.

The Moodle Learning Analytics API is an open system that can become the basis for a very wide variety of models. Models combine indicators (also known as predictors), targets (the outcome we are trying to predict), insights (the predictions themselves), notifications (messages sent as a result of insights), and actions offered to recipients of those messages, which can in turn become indicators. As we explore different parts of the system in this course, we will focus on different design criteria and measurement methods.
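To make the relationships between these components concrete, here is a minimal sketch. Keep in mind that the actual Moodle Learning Analytics API is implemented in PHP inside Moodle itself; the Python below is purely illustrative, and every class, field, and threshold in it is a hypothetical stand-in for the concepts just described, not the real API.

```python
# Illustrative sketch only: the real Moodle Learning Analytics API is PHP.
# All names below are hypothetical stand-ins mirroring the components above.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Indicator:
    """A predictor: a measurable clue calculated from activity data."""
    name: str
    calculate: Callable[[dict], float]  # maps one student's activity to a 0..1 value


@dataclass
class Target:
    """The outcome the model tries to predict, e.g. risk of disengagement."""
    name: str
    threshold: float = 0.5


@dataclass
class Insight:
    """A single prediction produced by the model for one student."""
    student_id: str
    score: float
    triggered: bool


@dataclass
class Model:
    """A model combines indicators with a target and produces insights."""
    target: Target
    indicators: List[Indicator]

    def predict(self, student_id: str, activity: dict) -> Insight:
        # Naive scoring: average the indicator values. A real model would use
        # a trained predictor (e.g. logistic regression) instead of a mean.
        score = sum(i.calculate(activity) for i in self.indicators) / len(self.indicators)
        return Insight(student_id, score, score >= self.target.threshold)


def notify(insight: Insight) -> None:
    """Notifications are messages sent as a result of insights. The actions a
    recipient takes in response can become indicators for future models."""
    if insight.triggered:
        print(f"Notify teacher: student {insight.student_id} may need support "
              f"(score {insight.score:.2f}).")


if __name__ == "__main__":
    model = Model(
        target=Target("at risk of disengagement"),
        indicators=[
            Indicator("no recent logins", lambda a: 1.0 if a["days_since_login"] > 7 else 0.0),
            Indicator("no forum posts", lambda a: 1.0 if a["forum_posts"] == 0 else 0.0),
        ],
    )
    notify(model.predict("s123", {"days_since_login": 10, "forum_posts": 0}))
```

The point to notice is the cycle: indicators feed a target's prediction, predictions become insights, insights trigger notifications, and the actions taken in response can feed back in as new indicators.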
These criteria will be documented in use cases: descriptions of how the user of a system wants it to work.

Learning analytics systems go beyond day-to-day utility and offer all participants in the educational process a chance to take on the role of an educational researcher. Learning analytics can be used by individual educators in small-scale action research, but they also support research on a much larger scale, and at a much greater level of detail, than has been possible before. There is a tendency to align products with targets and processes with indicators, but processes can also be targets we are trying to detect. We need both process and product data in order to make predictions and prescriptions. This quote comes from Simon Buckingham Shum's keynote speech at the 2017 Learning Analytics Summer Institute.

If a new learning initiative is being trialed, learning analytics can be used to compare two cohort groups using the same model. Researchers may also want to compare two different models to see which one produces better predictions, especially two models using the same target but different indicators. We want to assist researchers, but we also want to help ensure research quality.
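As a sketch of what such a comparison might look like, the following hypothetical Python snippet scores two candidate models that share a target ("completed the course") but rest on different indicators, against historical records where the real outcome is already known. The data, indicators, and the simple accuracy metric are all invented for illustration; a real evaluation would use Moodle's own model evaluation tools and proper statistics.

```python
# Hypothetical sketch: comparing two models that predict the same target
# ("completed the course") from different indicators, using historical
# data where the real outcome is already known.

from typing import Callable, List, Tuple

# Each record: (activity data, known outcome).
history: List[Tuple[dict, bool]] = [
    ({"logins": 40, "posts": 5}, True),
    ({"logins": 3,  "posts": 0}, False),
    ({"logins": 25, "posts": 0}, True),
    ({"logins": 2,  "posts": 2}, False),
]

# Model A: indicator based on login counts.
model_a: Callable[[dict], bool] = lambda a: a["logins"] >= 10
# Model B: indicator based on forum posts.
model_b: Callable[[dict], bool] = lambda a: a["posts"] >= 1

def accuracy(model: Callable[[dict], bool]) -> float:
    """Fraction of historical records the model predicts correctly."""
    hits = sum(model(data) == outcome for data, outcome in history)
    return hits / len(history)

print(f"Model A (logins): {accuracy(model_a):.0%} accurate")
print(f"Model B (posts):  {accuracy(model_b):.0%} accurate")
```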
What does the evidence show about the effectiveness of learning analytics? This table summarizes 123 studies across multiple sectors against four propositions. Many of the studies don't include evidence supporting any of these propositions. This problem is exacerbated by publication bias, the tendency to publish only positive results, and many studies may suffer from the Hawthorne effect, whereby the act of conducting a study tends to improve results no matter what the study involves.

In this workshop series, we will also focus on the ethics of learning analytics systems, particularly aspects covered by the European Union's General Data Protection Regulation (GDPR), such as the right to be forgotten, and by the Family Educational Rights and Privacy Act (FERPA) in the United States. Addressing these issues is not simple. Deleting student data from the server can impact predictions for other students, but the student is still at risk of exposure if the data is retained, even if the student's primary identifiers are removed.

We also have concerns about the ethics of providing notifications to teachers, students, and others in the learning process. We could inadvertently lower teachers' expectations for students, or students' expectations for themselves. On the other hand, if we have insights that could improve learning, we may have an obligation to act on those insights. Most importantly, we have to be mindful about the models we design and implement. It is possible for models to have a negative impact by distracting or discouraging participants in the learning process. Working within an institutional code of practice can help to avoid this. Models which incorporate data based on race, gender, or socioeconomic status may easily give biased results due to historical biases in the data.

Goodhart's Law describes the tendency for a proxy measurement to become more important than the value it represents. This is a critical risk in the selection of targets, but can also be a problem if too much emphasis is placed on indicators in messaging. Easy-to-capture measures are often misinterpreted and over-weighted simply because they exist. It is tempting to think "well, this data is better than nothing," but that is emphatically not true. Some kinds of data can be misleading and harmful, focusing attention ineffectively or discouraging learners. Remember: all indicators included in a model will be displayed as part of insight notifications.

Involving teachers, students, and other members of the educational community from the beginning is crucial. Students at all levels are best able to correct errors in data collection and modeling, and to question unconscious assumptions. Genuine involvement in the process also helps to build trust in the system. Finally, learning analytics models require continuous review and improvement. Model fit is likely to change over time as circumstances change at the institution.

Throughout this workshop series, educators and researchers will be asked to document their ideas about new models and model components in the form of use cases. There are many ways to define use cases, but for this workshop series we will use this general structure. In addition to the title, provide a one-sentence summary of the desired functionality in the form of a user story. Be sure to specify the primary actor or actors who will interact with your system, and the scope or context of the model. The description in the use case should be a paragraph or two informally describing how you expect the system to work. Including a link to a full discussion about the desired feature or functionality can be very helpful for resolving ambiguities during development, and provides evidence of support for a proposal.

We will come back to this quote many times. Our purpose in creating these systems is not simply to make accurate predictions or implement clever algorithms; it is to improve learning success, however we define that. All of our design and implementation efforts need to be conducted with this goal in mind.

Now it's your turn to sketch out the design for a new learning analytics model. Write a use case for a learning analytics model based on a specific learning question, describing its context and scope. Include all the elements: title, user story, primary actors, scope, and details, and ideally a link to a discussion about the need for this model. This discussion can be located in this workshop's forum, "Defining learning analytics goals and limitations", or elsewhere. An example use case follows below. To earn the Beginning Designer badge, post your use case in the Introduction to Learning Analytics workshop at https://moodle.org/analytics for review. And remember: you'll also need to review two other use cases to earn the badge.
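For reference, here is one possible shape for such a use case, with all of the elements listed above. The scenario itself is invented and is only meant to show the structure:

```
Title: Early alert for disengaged students

User story: As a teacher, I want to be notified when a student stops
participating in course activities, so that I can reach out before they
fall behind.

Primary actors: Teachers (recipients of insights); students (subjects
of the prediction).

Scope: A single Moodle course with weekly graded activities.

Description: The model watches login frequency and activity completion.
When a previously active student shows no activity for a set period, the
teacher receives an insight with suggested actions (send a message,
review the student's recent work). Whether the teacher acts on or
dismisses the notification is recorded, so responses can inform future
models.

Link: A discussion thread in this workshop's forum, "Defining learning
analytics goals and limitations".
```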