Okay, so I'm Kate. I lead partnerships at Tiny Technologies, makers of the world's most trusted WYSIWYG (what-you-see-is-what-you-get) component for rich text editing. I also hold a usability certification from the Nielsen Norman Group, and I've been working in branded content for more than 15 years. I hold other certifications too, and I've won multiple awards in branded content, including Cannes Lions and Webby Awards. I've served on a Cannes Lions jury, and I was named one of the most influential women in native advertising by the Native Advertising Institute in 2019. My work has been featured in Harvard Business Review, the Financial Times, TechCrunch, and many other publications. I'm really, really passionate about content creation and everything that surrounds it.

This talk was inspired by our own internal dilemmas as we've been figuring out how to best integrate AI capabilities into a product that didn't have them before. What kind of research-backed guidelines can we follow? Do they even exist? And how do we implement AI in a way that isn't disruptive and is actually useful for our users?

So first off, I'd like to share that Microsoft recently distilled about 20 years' worth of research on human-computer interaction. They gathered over 150 different recommendations that came out of all these studies, narrowed them down to 18 guidelines, and then tested those guidelines to make sure they're practical and applicable, which is really important. This paper sums that up, and we'll explore some of the guidelines during the talk.

The other really interesting recent piece is an article by Jakob Nielsen called "The Articulation Barrier." It's about how the written prompts that generative AI tools typically rely on are a problem for inclusivity, simply because literacy is an issue. Even in highly developed countries, a big percentage of the population can struggle with literacy.
So if users are asked to write extensive prompts, that's actually not very inclusive, and Nielsen therefore advocates for hybrid UIs that combine written prompts with commands, rather than relying on written prompts alone.

So let's dive in. The guidelines cover several phases of interacting with AI functionality: initially, as you just start; during interaction, when you're already engaging with an AI capability; when the AI is wrong, when it makes a mistake; and over time, as the system evolves. Here we'll try to figure out how all of this can apply to Moodle and the learning experience, and how you'd add an AI capability into Moodle following those guidelines.

Let's start with the initial phase. It's important to set expectations, because AI can't do it all, and it's important to explain the limitations to the user: what the AI can and cannot do. For example, we just discussed plagiarism detection. It's important to explain where the limitations are, what's possible, and what is potentially accurate or inaccurate. This sets expectations so users can figure out whether they can rely fully on the AI or have to mind the limitations that are out there.

During interaction, a few important guidelines are to adapt your services based on the context and to show relevant information. For example, there could be AI capabilities that adapt to learners' progress, specific interests, or challenges. You also need to match relevant social norms and definitely mitigate social biases. I think some of the Nielsen recommendations I just mentioned, related to the literacy barrier, could fall into this category as well, making AI-enabled tools more inclusive.

The next stage is when the system is wrong. As we all know, AI is often wrong. There are many mistakes and hallucinations that can stem from training-data issues, biases, or just a mismatch between what the user is trying to do and what the AI is trying to do.
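To make the expectation-setting guideline concrete, here is a minimal sketch (not from the talk, all names hypothetical) of an AI feature that discloses what it can do and where it is likely to be inaccurate before the user first invokes it, in the spirit of the guidelines "make clear what the system can do" and "make clear how well the system can do what it can do":

```python
# Illustrative sketch: disclose an AI feature's capabilities and
# limitations on first use, so the user can calibrate their trust.
from dataclasses import dataclass


@dataclass
class AICapability:
    name: str
    can_do: list[str]          # what the feature is good at
    limitations: list[str]     # where it is likely to be wrong
    acknowledged: bool = False # has the user seen the disclosure yet?

    def disclosure(self) -> str:
        """Build the expectation-setting text shown before first use."""
        lines = [f"{self.name} can:"]
        lines += [f"  - {item}" for item in self.can_do]
        lines.append("It may be inaccurate with:")
        lines += [f"  - {item}" for item in self.limitations]
        return "\n".join(lines)

    def invoke(self, run):
        # Show expectations once, before the feature does anything.
        if not self.acknowledged:
            print(self.disclosure())
            self.acknowledged = True
        return run()


plagiarism = AICapability(
    name="Plagiarism detection",
    can_do=["flag passages similar to indexed sources"],
    limitations=["paraphrased text", "sources outside its index"],
)
result = plagiarism.invoke(lambda: "no matches found")
```

The point of the sketch is only the ordering: the disclosure happens before the first result, so the user knows whether "no matches found" means "definitely original" or merely "nothing in the index matched."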
So it's important to figure out how to react when the AI is wrong. First, support efficient invocation: ensure users can trigger the AI on purpose, rather than it running by default. Support efficient dismissal, so the AI can be easily dismissed when it's unhelpful for any reason, and support efficient correction. And then there's another important guideline: ensure the system explains why it did what it did.

Then there are multiple guidelines on how to approach an AI capability over time, as it trains and becomes more adjusted to specific use cases or scenarios. I want to highlight guideline number 14: update and adapt cautiously. I think it's really important, because AI systems can adapt really fast to specific use cases. For example, if we add the capability of adapting a curriculum to learners' progress, it can go too fast, and that could actually cause disruption, because users need to understand why changes are happening and what's going on. Adaptation should happen, but it can't be that disruptive.

So, as mentioned, we added an AI Assistant capability in TinyMCE, powered by GPT-3 from OpenAI. This capability helps enhance human writing, and at the same time it follows the principles I just shared. It combines written prompts with UI commands that you can customize to your specific use cases, and it can be easily dismissed. It shares its limitations, and it's part of a familiar interface. So just, I'm not sure, we have five minutes, okay.
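The "when the system is wrong" guidelines can be sketched as a suggestion lifecycle. This is an illustrative model, not TinyMCE's actual implementation, and all names are hypothetical: suggestions are produced only on explicit request (efficient invocation), can be dropped with one call (efficient dismissal), can be edited in place (efficient correction), and each one carries an explanation of why the system produced it:

```python
# Illustrative sketch of the "when wrong" guidelines as a suggestion
# lifecycle: explicit invocation, easy dismissal, easy correction,
# and an attached explanation of why the system did what it did.
from dataclasses import dataclass


@dataclass
class Suggestion:
    text: str
    explanation: str      # why the system suggested this
    status: str = "pending"


class AIAssistant:
    def __init__(self):
        self.log = []  # every suggestion is kept for review

    def suggest(self, prompt: str) -> Suggestion:
        # Efficient invocation: this only runs when the user asks,
        # never as an unrequested default behavior.
        s = Suggestion(
            text=f"Rewrite of: {prompt}",
            explanation="Based on your prompt and the surrounding text.",
        )
        self.log.append(s)
        return s

    def dismiss(self, s: Suggestion) -> None:
        # Efficient dismissal: one call, no confirmation dialogs.
        s.status = "dismissed"

    def correct(self, s: Suggestion, edited: str) -> Suggestion:
        # Efficient correction: the user edits the suggestion in place;
        # the original is marked so the system could learn from it later.
        s.status = "corrected"
        return Suggestion(text=edited, explanation="Edited by the user.")


assistant = AIAssistant()
s = assistant.suggest("summarize this unit")
assistant.dismiss(s)
```

The design choice worth noticing is that dismissal and correction are first-class operations, as cheap as accepting: the guidelines treat recovering from a wrong AI output as a normal path, not an error path.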
So I don't think we have time for that, but just thinking about the benefits of human-centered design, I want to highlight that there is a ton of research already out there, developed over 20 years, that is super valuable. Some of those papers are actually not about AI, but they can definitely inform how we approach AI-enabled platforms to make them more trustworthy. And if users can trust them, then adoption will be much easier and safer, even in spaces like education that definitely require more caution about everything related to AI. I encourage you to take a look at this paper; it has a lot of useful recommendations. And Nielsen's point about prompts is also, I think, super important to know, just because prompts are difficult to write for anybody, and for some people they're impossible. So yeah, thank you so much. We can...

Thanks very much. I think we have time for, I can't read that far away. Three questions. Okay, we have time for three questions. Three minutes, one minute per question. So if you want to put your hand up, Anna will come around with the microphone.

Thanks a lot. I have a question on the, let's say, forward-looking side. I saw at the beginning that you are Nielsen certified, so I wanted to ask you: do you think that AI, or its integration into systems of interaction, is going to change standard heuristics, or an activity like information architecture, for the design of user experience?

For sure. I, you know, I think that, oh, sorry. Yeah, thank you for the question. I think it will definitely change the whole certification process, and definitely the learning materials. Unfortunately, I got my certification before AI, but at the same time I'm sure, based on what I know, that some of the modules are being updated, as they were updated at some point for mobile design, which completely changed how we approach usability in many ways.
But I don't think we have enough data yet. Even in the article I referenced, Nielsen himself admits he doesn't have enough data to fully back this up, apart from literacy percentages across the globe. But I think that as we progress we'll get more data, and then it becomes easier to certify based on more research-backed information, because certifications by the Nielsen Norman Group are all backed by usability research, which is very important, I think. Thank you.