I'm Michael Feldsmann. I work at NUI Galway, based in philosophy, but I don't really do philosophy; I do applied ethics, ICT ethics in particular. What is that? It's the application of ethical principles, codes of practice, standards of good practice, and wider considerations to ICT applications and also to the research process.

Some of the ethical principles I think are important to keep in mind: respect for autonomy, that is, the choices of different stakeholders; non-maleficence, doing no harm to different stakeholders; beneficence, trying to achieve something of value, which is basically what we hear about here; and justice, so non-discrimination and other issues around what is fair.

In the ICT field, I usually consider ethical concerns very closely together with legal requirements, especially data protection but potentially other requirements as well. The incoming data protection regulation is a big deal that needs to be taken into account alongside the more general issues. In terms of where to embed this in the debate: big data analytics and algorithmic decision-making raise a lot of ethical challenges, and it's part of my job to consider them. I'm working as part of a research cluster on technology and governance at the Whitaker Institute at NUI Galway, and at the Centre of Bioethical Research and Analysis as well. So that's just to give you a bit of the background.

Don't crowd your slides. I'm very sorry I did; the reason was that I had different slides until I found that the template we got looked rather different, so I'm not going to run you through all of this. Maybe just in terms of the challenges. Respecting stakeholder autonomy: what does that involve? Well, if we have learning analytics running in the background, you have a kind of dependency and power disparity between the organisation and the users. So how can you achieve transparency, for example, so that people know what is going on? And then, prevention of harm.
So what can you do to ensure that nothing bad comes of using a system that's targeted towards something good? Avoiding disadvantages arising from the use of data and the resulting classification: if you're classified as being in the at-risk category, what does that mean? Does that come with any negative impact on those people, even though we do it in order to help them? Safeguarding against data protection breaches, and a realistic assessment of de-identification solutions, for example, is really important. Anything relating to data protection really needs to be considered in that context.

Benefit: we've all talked here about value, and we're all hoping for value from data analytics. But don't overstate or simply assume benefits without the evidence; we really need a reasonable evidence base before we can make those claims. And then, in relation to justice, the potential for unfair discrimination is something that needs to be considered. A lot of ethical reflection is really fairly differentiated, so I didn't want to bother you with all of that; this is just a little bit of general input.

What I think would be important for the future, when considering ethical concerns in the implementation of learning analytics in Ireland, is to build trust and avoid reputational risk. It is part of governance: ethical considerations should be thought about at every part of this pyramid. It's not something that comes into play at the very end, when we talk about user satisfaction; it comes into play already when you decide what kind of parameters to capture and what to do with them. So maybe the development and adoption of a code of practice for learning analytics in the Irish higher education context could be something to consider. There are some very helpful Jisc resources that might help in doing something like that.
Yeah, that's as much as I want to say today. Okay, thank you.