So I would like to first acknowledge that UBC has different sites. We are going to acknowledge the two main campuses: UBC Vancouver, which is located on the traditional, ancestral, and unceded territory of the Musqueam people, and UBC Okanagan, which is located on the traditional, ancestral, and unceded territory of the Syilx Okanagan people. The plan for today, and I just want to emphasize that this is an introductory workshop on both UDL and AI and how we can integrate the two, is to cover the three principles of UDL and explain how AI can enhance these principles or help us actually use them in our programs and courses. We will talk about the value of integrating AI into the UDL framework and its benefits, and show at least one AI tool or technique that you can use in your own context and program, to help us move towards a more inclusive and accessible teaching and learning environment. I would like to start with this quote from Albert Einstein: "Education is not the learning of facts, but the training of the mind to think." It highlights why we are here as educators and what we are here to explore. Our goal is not only to impart knowledge, but to foster environments where critical thinking, creativity, and inclusivity are at the forefront. Today we will dive into how AI within the UDL framework can transform our educational practices, ensuring that we are not just teaching facts, but empowering minds to think, question, and innovate. I'd also like to emphasize that we may even go beyond that and think deeper: we are not only training the mind to think, but also the heart to care for others and to become good citizens. So first we're going to briefly go through an introduction to UDL, and after that Lucas is going to talk about AI and the integration of AI into the UDL framework.
To start with UDL: the Universal Design for Learning framework was created by researchers at the Center for Applied Special Technology, known as CAST. For those of you who know about UDL or want to know more, it started back in the 1980s as a result of the alignment of three conceptual shifts: advancements in architectural design, developments in educational technology, and discoveries from brain research. These three conceptual shifts led to the creation of this framework. The universal design movement in learning has its roots in universal design in architecture, which focuses on removing barriers in the physical environment. Universal Design for Learning is an approach to curriculum design that can help teachers, instructors, and educators customize curriculum to serve all learners regardless of ability, disability, age, gender, culture, or linguistic background. In effect, UDL provides a blueprint for designing strategies, activities, assessments, and tools to reach students in all their diversity and help them learn and fully participate in an activity. In my own experience being involved with UDL and studying it for the last 10 years, I have noticed that UDL is not only for curriculum development; it automatically broadens your perspective, so that you pay attention to these principles in every moment and in any way you can, in your daily practices and in the different activities that you have. Why UDL? There are different reasons we are focusing on UDL, and why we are having this session and programs such as the UDL Fellows program at UBC. There were projects such as Beyond COVID, a project started two years ago based on the lessons learned during COVID. Hundreds of staff, faculty, and students got together from both campuses to think about what the future of teaching and learning would be and how we can use those lessons learned in order to move forward.
One of the recommendations was UDL. Another driver is UBC's strategic plan, the Indigenous Strategic Plan, and the Inclusion Action Plan. All of them focus on how we can move towards a more inclusive and accessible campus, community, organization, and institution, which is again an area UDL addresses. Also, the BC (British Columbia) digital learning strategy took a similar approach to UBC's: representatives from different institutions got together to think about how they can be more inclusive and accessible in digital learning, and one of the recommendations was the UDL framework. The overall goal, the reason why we are here, is to ensure that our learners' needs are met and to be able to identify and remove barriers, and that's why we are going to study more about UDL today. There are three principles that we are going to talk about in more detail as we go through the workshop. One is multiple means of representation: ensuring that the content or material is provided in more than one way. The second is providing multiple means of engagement: engaging your learners in more than one way. The third is providing multiple means of action and expression: giving learners and participants more than one way to participate and demonstrate their knowledge. All three principles go back to how our brain works. There are three networks in our brain. The recognition network is focused on the "what" of learning; it enables us to see and interpret words and symbols, which aligns with the representation principle. The affective network is about the "why" of learning. As you all know, some students get motivated by group activities, some get scared; some like new ideas and exploring new things, some don't. This network aligns with the engagement principle. And the strategic network is about the "how" of learning, which helps us plan, execute, and so on, so it aligns with action and expression.
I just want to briefly mention that there is research behind this framework, and it is evolving as we speak based on the research coming in. Again, our goal is to be able to identify barriers, including systemic barriers, and to remove them. What we mean by systemic barriers are the policies, procedures, and practices that we have, intentionally or unintentionally, that can prevent individuals from having equal access to a service, a product, or a process, and from being able to fully participate in a situation, a classroom, or an activity. Some examples of these systemic barriers are unclear learning outcomes, one way of assessment throughout the course, providing information in only one way, classrooms without wheelchair access or wheelchair-accessible tables, expensive textbooks for those who cannot afford them, field trips, and lack of transparency. We will get into these in more detail, but I just wanted to say that these are some of the barriers. Similar to a physical environment that might have barriers, like stairs, which might not be recognized as a barrier by those who don't use a wheelchair, in our curriculum and instructional environment we might have barriers that we are not aware of. So with that, I am going to pass it on to Lucas.

Wonderful. Thanks, Afsane. That was a great introduction to UDL. What I'm going to do now is start talking about Gen AI, kind of an intro to Gen AI, and I think at this point most of you have probably been introduced to it, so we'll keep that fairly short, but specifically in the context of UDL. Before we do that, what we found is really helpful in these workshops is to get more familiar with generative AI tools, so it's worth following along with us. We've created a worksheet; you'll see the bit.ly link for it on the slide (gen AI UDL). You can also scan the QR code with your phone and open it that way if you want. If you have the worksheet open, you'll find on there all the prompts we talk about.
As we go through different prompting approaches, I encourage you to open up one of these tools, which I'll discuss in a second, try these prompts out, adapt them for your own use, and just kind of work through it with me as we go. So, a couple of tools you may be interested in using. Can I get a thumbs up if you've used ChatGPT before? I'll just take a look at the reactions. Lots of thumbs going up there. So if you have ChatGPT, you may want to log into it. I'm going to be doing a lot of demos with GPT-4 today rather than GPT-3.5, and the reason is that GPT-4 has the highest quality responses of the ChatGPT models right now. Just to give you a rough idea of that: GPT-4 was able to score in the 90th percentile on the standard bar exam, while GPT-3.5 scored around the 10th percentile, so there's a significant difference. If you really hate logging in and you don't want to create any accounts, you can go to TalkAI. It's full of spammy advertising, so be careful not to click on the wrong arrow, but it does give you a way to play with the tools without logins. MS Copilot is available at UBC; it's had a privacy impact assessment done on it. If you just search for Bing Chat, you'll be able to jump right in without a login, and if you're at UBC, you can get in through your Teams. In the background it uses GPT-4 as its LLM; however, somehow, and I don't know how they did this, Microsoft made it not as good as GPT-4 even though it's using the same large language model. And then Google Gemini. Google Gemini is relatively new; I'm going to do some demos today with Google Gemini Advanced. It's a pretty slick interface, a pretty interesting product that allows you to link up to your Google Docs and your Gmail if you're willing to take that privacy jump, and it's also quite a powerful large language model.
So I'll be doing my demos with those different tools, and I encourage you to follow along with one of them. So what is generative AI? Generative AI works by analyzing and learning from a vast amount of data. The data has been gathered by scraping the web, by purchasing large databases of web archives, and by scraping books; one book archive that has been scraped has, I believe, well over a hundred thousand books in it, along with all of Wikipedia. So it's taking this data and, with a lot of air quotes here, "learning" from it; it's not really learning, but it's able to go through this data using complex word prediction and generate new text, new images, and so on. If you've used Netflix recommendations, that's more traditional AI, and it doesn't generate anything; the key here is that generative AI generates something new. That means whenever you prompt ChatGPT, it's going to be a unique output, which creates some challenges in itself. I really like the term augmented intelligence, and I want to introduce it here today. Artificial intelligence focuses on replacing the human, which I think in some cases generative AI is going to do, but augmented intelligence thinks about augmenting what we do. How can we improve our creativity? How can we augment our teaching practice? Right now, as it stands, we really need a human in the loop with these tools because of the errors and hallucinations they make, and so I think the term augmenting is a helpful one, especially in the context of UDL. Just like the web created a whole new set of issues around accessibility, Gen AI is already starting to create systemic barriers to access as well as equity issues. One of the key issues coming to the fore is pay-to-play. I'm using GPT-4 right now; I pay $20 US a month for it, and it's a much more powerful tool. Likewise, the paid Google Gemini Advanced is a better tool.
So when you think of Cory Doctorow's term "enshittification," the idea that the tools we have enshittify and get worse over time, I think the pay-to-play issue is going to become bigger and bigger. Number two is accessible interface issues. Already we know that ChatGPT and Bing both don't have very accessible interfaces in themselves; they're difficult to use if you're visually impaired. They are improving over time, but we're already seeing these issues. Number three, and one of the reasons I think you're attending and we're talking about this in workshops this way, is the emerging skill inequities around these tools. We have some folks who are able to make these tools sing, using really strong prompting to get great output. We have other folks who may not be using them much, and we're starting to get a skill imbalance, which affects our students' equity and affects societal equity as a whole. So, what are some of the strengths of these tools? I see three really big opportunities. One of them is the ability to generate on-demand custom content very quickly; this is something relatively new with these tools. Number two is how we can integrate them into our assignments and assessments and have more personalized assignments and assessments. This could mean having different assignments and assessments for different students, or taking open assignments and adapting them for our class or teaching context. And third is the idea of scalable tutoring, which I talked about in a workshop yesterday. We know that tutoring is a more impactful educational practice; that's what Bloom found in his two sigma problem paper in 1984. I think that continues to hold, but we also know that not everyone has access to tutoring, so Gen AI can give us a new way to scale tutoring with our students. And we're also already seeing some really interesting uses of generative AI in the accessibility and inclusivity space.
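As an aside for anyone who likes to see the mechanics: the "scalable tutoring" idea boils down to keeping a running conversation with a model that has been given a tutoring persona. Here is a minimal sketch in Python. The class, system prompt, and the API wiring shown in the comment are my own illustration, not something demonstrated in the workshop; the `gpt-4` model name in the comment is an assumption, and any real deployment would still need the human-in-the-loop checking discussed above.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Message = Dict[str, str]

# Illustrative tutoring persona; a real course would tailor this prompt.
TUTOR_SYSTEM_PROMPT = (
    "You are a patient tutor. Ask guiding questions one at a time "
    "instead of giving away the full answer."
)

@dataclass
class TutorSession:
    """Keeps the running conversation so each reply has full context."""
    # Any function that maps a message list to a reply string,
    # e.g. a thin wrapper around an LLM API.
    complete: Callable[[List[Message]], str]
    messages: List[Message] = field(default_factory=lambda: [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT}
    ])

    def ask(self, student_text: str) -> str:
        # Record the student's turn, get the model's reply, record it too.
        self.messages.append({"role": "user", "content": student_text})
        reply = self.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Hypothetical wiring to a real model (assumes the OpenAI Python SDK
# and an API key; not run here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   def complete(messages):
#       resp = client.chat.completions.create(model="gpt-4", messages=messages)
#       return resp.choices[0].message.content
#
#   session = TutorSession(complete)
#   print(session.ask("Can you help me understand photosynthesis?"))
```

The design point is simply that the full message history is re-sent on every turn, which is what makes the tutor feel like one continuous conversation.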
I'll get a thumbs up again if you've ever heard of Be My AI; I'm just going to take a look at the reactions here. So, there used to be an app called Be My Eyes, and the way it worked is that someone who is visually impaired would take a picture, say they were shopping for clothes, of the sweater section, and ask which one is the red sweater. It was crowdsourced, so people would type in their answers. Now this is an app using generative AI and computer vision, and it's able to help people who are visually impaired by describing the environment around them. The second example I wanted to give you is something called Goblin Tools. This is a tool built by an individual software developer as a way to support people who are neurodivergent, in particular people with autism and ADHD. What this tool does is use ChatGPT under the hood, and it offers a number of different tools. A couple that I really like as someone with ADHD: one of them is the magic to-do list, where I'm able to take any task and break it down into multiple further tasks. In this case I entered "study for an exam," and if I clicked on "create a study schedule," it would break that down into a task list. The Judge is really interesting as well. It was built for people with autism, but I think it's also very helpful generally: you can put your email into it and ask what the tone of the email is. I know I use GPT for this quite a bit myself; I'll ask, is this email angry or is it not angry, and kind of get an idea of tone. So we're already seeing a lot of tools building in accessibility and inclusivity features. And I'm going to turn it over to Afsane now.

Thank you so much, Lucas, for introducing Gen AI and some of the tools. There is actually a question in the chat about whether UBC will be offering more of these AI tools in the future.
I posted the Gen AI resource page in the chat, but I'm just wondering if you have any updates about whether UBC has any plan to include more tools.

I would imagine they will; I don't have any updates as of yet. There are three committees right now at UBC: one is an AI committee across both campuses that's looking at AI overall, and there's also an LLM committee looking at large language models. Bing Copilot, or sorry, MS Copilot, is now available for UBC faculty and staff, and soon for students, at the enterprise level, so that's going to give us more data security. As more tools go through a privacy impact assessment, I would expect there will be more at UBC.

Thank you so much, Lucas. So now we're going to talk about each principle in more detail, and then Lucas is going to show, through AI techniques and some of the tools, how we can actually use each particular principle and integrate it into our programs and courses. As you know, UDL is for inclusive education and ensures equal access, and by incorporating AI we can amplify inclusivity through personalized and tailored learning experiences. The first principle is multiple means of representation, which encourages presenting information, content, and instructional material in more than one way: through text, visuals, audio, and interactive experiences. Some of the techniques or examples where you might see this principle in action, or some of the strategies related to it, are: providing text equivalents for podcasts or videos (for example, if you're using a YouTube video, make sure it has captions, or if you're developing a new video for online courses and resources, make sure it has a transcript); embedding vocabulary support such as glossaries or illustrations; providing translation sites or links to multilingual glossaries; and using concept maps to show the links between ideas and topics.
Within your course, you see that that part is bolded, because later on Lucas is going to pick a few of these examples and show how AI can help us apply that strategy in our courses. Highlighting key information in text and providing scaffolding are some of the other strategies we can use for multiple means of representation. I'm going to briefly discuss a few tips about how you can make things more accessible and inclusive related to this principle. The first one is providing alternative text for your visual images, and I want to highlight the difference between a caption and alternative text. Normally when I'm asked what alternative text is, I say it's the message that you want delivered to your learners, to your audience, through that image. If I asked you to close your eyes and explain that image, that explanation would be the alternative text; that would be the definition. And in this example, you see the differences. One of the challenges, having been in the field over 20 years as an instructional designer with that experience and background, one of the challenges in becoming more inclusive, is providing alternative text for images and transcripts for videos. Again, Lucas is going to share some examples that help you do this faster. And I want to say that if you provide alternative text for an image, it's not only for those who have a vision impairment. An accessible design is a good design. It also helps those who have low bandwidth and cannot see the image because it is not loading well, and others too. So all these practices and strategies are actually helping us move towards a more accessible design, which is good for everyone. Another tip is describing your hyperlinks, so that if a link is broken, people can search for the description and find that information as well.
The next one is using proper color contrast and not using color alone to convey information. What I mean by that is that you are more than welcome, feel free, to use any colors, but do not use them to convey information. For example, do not say the correct answer is in green, or click on the red button to find the result, because of those who have color vision deficiencies, which are more common in males than in females; about one in twelve men have a color vision deficiency. If they have a red-green color vision deficiency, they will see both as the same color. For example, this slide shows the languages of India. You see that some of them are in red and some in green on the left-hand side, and the right-hand side shows how people with that particular deficiency might see it. So it's important to use proper color contrast in your visuals. Now, in the next few slides, Lucas is going to share some of the tools and techniques he's been using to apply this principle in courses and programs. Lucas, back to you.

Wonderful. So I'm going to go over how to provide multiple means of representation with AI, and these are just a few ways; there are going to be many more. The way we're going to do this is that I'm going to do about three demos, and then we're going to do a group activity where you have a chance to work through the same demos or your own practice. But I encourage you, as I go through this, to please feel free to follow along. So I'm going to move away from the slides here and go right into our worksheet; again, you'll find everything within the worksheet. The first demo I wanted to show you is creating alt text for an image, and I'm going to use the same image of the crops that Afsane used. I'm going to use GPT-4. Alt text is a little tricky: what I've been finding in my experiments, and this is always changing and emerging, is that I cannot get good alt text out of most of these tools; GPT-3.5 doesn't allow for uploads at all.
And I can't get Gemini or Microsoft Bing to give me good alt text either. I don't know, again, how Microsoft managed to hobble GPT-4, but somehow it makes errors that are more difficult to correct than it's worth. But with GPT-4 and its computer vision, I've been able to use these tools regularly to create good alt text. You'll see in my prompt I said: create alt text for this image to assist someone who is visually impaired. By adding this specific part to the prompt, I'm hoping that it will produce more meaningful and concise alt text. Let's see what we get. What's interesting about these demos is that you never know what's going to come out, but you'll see that it's getting the main idea, that the plants on the left side are underdeveloped, as well as what the particular signs say. So it says this illustrates the impact of proper fertilization. I could ask it for fewer characters, but generally for all of my slides I've been using GPT-4 and getting fairly good alt text. Again, it does need a human in the loop to check it. The next example I wanted to share with you is the idea of transforming text. I went on the internet and found an example of incredibly complex text; I'll give you just a chance to look at how complex it is. I think I looked up "really difficult to understand text." Now I'm going to take this text, and I'll just use GPT-4 again because I'm already there, and paste it in. I'm going to use this prompt: transform this text in the following ways: explain it like I'm five, create a glossary of terms, and translate it into Farsi. That's the prompt we're going to use, and I'm just going to run it. Great. So now you'll see it's taken that same text, explained it like I'm five, and given me a glossary of terms for that text, and in a moment it's going to translate the text into Farsi. So when we think about multiple means of representation, again we need a human in the loop checking this, but this gives us a really powerful ability to transform text in different ways. And if we want to think even farther into the future, think of the idea of a student being able to transform their own text as they engage with it; I would imagine there will be apps around that process as well. Now, I don't speak Farsi, and this brings up a huge issue here: these tools are 100% confident, but maybe 80% accurate, and we don't always know where the inaccuracy is, so it still does require a lot of checking. The third example I'm going to give you is a mind map. Again, one of the ideas around multiple means of representation is creating concept maps. I'm going to use ChatGPT for this, but with a plugin built into GPT-4 called Show Me diagrams. So I've added this, I'll paste in the prompt, and I'm asking it to create a mind map of cell biology at a fourth-year level. Again, I never know if these demos are going to work, but fingers crossed, we should get a mind map. So here we are. There you go, it didn't work on the first try; we'll see if it does after the second try, and if not, I'll show you another way we can create a mind map. Perfect. So now we have a mind map, and we can open it as a full-screen mind map. We can also open it in Miro and edit it within Miro. If you don't have access to GPT-4, you can instead just have the model create a mind map using headings, or create a mind map in CSV form, and there are tools you can upload that to and they'll create your mind map for you. So, over to you now: what I'd like you to do is spend about five minutes, probably four minutes, and transform a piece of text. You can use the piece of text I shared; think about all the different ways you can transform it.
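If you'd rather script this kind of transformation than paste prompts into a chat window, the demo prompt can also be assembled programmatically, which is handy when you want to transform many passages the same way. A minimal sketch in Python; the function name and parameters are my own illustration, not part of the workshop materials, and the model's output would still need a human check, especially for translations:

```python
from typing import Optional

def build_transform_prompt(text: str,
                           simplify: str = "explain it like I'm five",
                           glossary: bool = True,
                           translate_to: Optional[str] = "Farsi") -> str:
    """Assemble the multi-step transformation prompt from the demo:
    simplify the text, add a glossary, and optionally translate it."""
    steps = [simplify]
    if glossary:
        steps.append("create a glossary of terms")
    if translate_to:
        steps.append(f"translate it into {translate_to}")
    # Number the steps so the model treats them as separate tasks.
    numbered = "; ".join(f"({i}) {s}" for i, s in enumerate(steps, start=1))
    return f"Transform this text in the following ways: {numbered}.\n\n{text}"

# Example: build_transform_prompt("Mitochondria are organelles that...")
# produces one prompt you can paste into ChatGPT or send through an API.
```

The same template works for any audience: swap the simplification step for "rewrite at a grade 10 reading level," or the target language for whichever your students need.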
If you have GPT-4, you can create alt text for an image; if you don't, you can use MS Copilot to create alt text for an image, keeping in mind that there could well be errors with the Microsoft alt text. And number three, create a mind map just by having the model output it as headings or a CSV file. All right, the instructions for this activity are also in your worksheet. I'm going to give you about four minutes now to play with it, and then we'll share back a little bit about what you got. So please go ahead.

So the second principle we are going to talk about is multiple means of engagement, which focuses on engaging students and maintaining their interest in learning. It encourages us as educators to think about different ways and different types of activities to keep our learners, our audience, and the participants engaged in the teaching and learning experience. Some of those examples we are going to share on the next slide, and Lucas is going to use AI tools to demonstrate them and show how we can actually create a more engaging environment. The main thing is that you need to make sure the learning experience is meaningful and relevant for people, so you can use different methodologies, strategies, and technologies, diversifying all these processes in order to become more inclusive and accessible, engage your learners, and make the learning more meaningful. You can invite guest speakers to your classroom or event; create a detailed course schedule so that it's easy to follow; and create group work guidelines and a kind of community agreement for group work, where people know what is expected of them, know why they are doing an activity, and have a guideline to follow and compare against.
So again: creating group work and building peer-to-peer feedback; community engagement and collaborative learning are among the things we encourage in order to make a program more inclusive. Allowing multiple attempts on exams. Follow-up questions: if you ask students to watch a video, you may have some activity or question after it to give them a reason, so kind of that what, why, and how approach: what is this, why are we doing it, and how are we going to do it. And also providing feedback. These are some of the examples and strategies you can use in your classroom that align with this principle of UDL, ensuring you give students different options to be involved, participate, and have equal access to the classroom. Over to you, Lucas. Lucas, you are muted.

Oh, I hate that. That's the worst feeling ever. Rookie move. What I said, just to myself here in my place, was that we're going to do the same thing as last time. I'm going to go through two different demos of how we can use Gen AI to create, or to leverage it for, multiple means of engagement, and then you're going to have a chance to do it as well, and I'll try not to mute myself. So I'm going to go back to the worksheet again, and if you want to follow along with me, please do. We're going to take a look at multiple means of engagement. The first one I wanted to show you is creating rubrics. These tools are fairly good at creating rubrics; the only challenge I've been hearing is that sometimes it takes a lot of revision on the rubric to make it accurate or to make it work the way you want it to. For this demo I'm going to use Google Gemini, just to give you a feeling for it; this is Google Gemini Advanced. I'm paying to use this per month; I think I have another month free before that kicks in.
But in benchmarking tests, Google Gemini has been almost at par with GPT-4: above in some ways, below in others, and it's kind of trial and error learning when it's strongest and when GPT-4 might be. So my prompt is: act as a communicating science instructor. This is using a persona; by using personas we can get more accurate output from the models. With the specialization in science communication, create a rubric to assess third-year students' blog posts about a citizen science project. So I'm being very specific here. The rubric should include the following specific list of criteria, and I've outlined all of those aspects, so I'm going to give it a try now. Again, never quite knowing what we'll get in the demo: sometimes it will give me these rubrics in tables, sometimes it will just give me text for the rubric. Hopefully it doesn't just stall on me; we'll see what the output is for this particular one. So this is the rubric I got, within a table, and you'll see that I can scroll through here. It's giving me all of the different criteria that I set up, within a table, and if I wanted to export this to Google Sheets, I could click here and it would export immediately to Google Sheets for me to access. So it can be good at creating rubrics, but again it can take some work. The next thing I wanted to do, and I wasn't going to demo this one, but by using this rubric I could put in an example of writing. Actually, let's just go for it: I have a sample blog post here. I'm going to copy this sample blog post out, go back to Gemini, and say: use this rubric to evaluate the following blog post. And we can start thinking about what it would be like if we started having Gen AI as part of the peer assessment process for students, maybe as an early peer assessment before their peers get to it, maybe as part of the overall process.
So it's going to go through the rubric. It's scoring the post for me based on that rubric and giving me suggestions for improvement; again, we always need a human in the middle here, especially when we're getting into things like evaluation, because it could be very challenging if there are errors related to that. The next example I wanted to share with you is the idea of creating interactive personas to engage students; the idea here is using generative AI as a space of dialogue, where students can dialogue with someone. For this example, I've used a pretty large prompt, and just a tip that I found helpful: I actually use ChatGPT to help me develop my prompts. It's quite good if you put a prompt in and say, improve this prompt. So: imagine you're an urban planner working for the city of Vancouver in the 1980s. I've outlined the expertise of that urban planner and their specializations, I've outlined how I want them to respond and what their tone should be, and at the bottom I say: talk to me in a conversational style, one question at a time. Doing this makes the tool more interactive. So let's see what we get. Now it's saying: let's delve into the urban planning landscape of 1980s Vancouver. One of the key issues was increasing demand for residential and commercial spaces. So, 1980s, I have some news for you about how that went. How did Vancouver respond to this challenge, particularly in terms of zoning and development? And I'm just going to put: high density. Now I'm going back and forth with it, using this as an engagement tool to connect me to that particular era, and it's talking about general zoning policies. Of course, this would need a student, or a faculty member, to critically evaluate the output. And there's my next question. Again, over to you. We've talked about a couple of different multiple means of engagement, and what I would like you to do now is spend, again, about four minutes.
Have the model (I'd recommend using MS Copilot if you can) create a rubric for an activity or an assignment. If you have time, have the model create a rubric for peer assessment and then use it to assess a paragraph you put in; that might take longer, so if you don't get to it, that's fine. And number three: create a persona from your discipline that students could use to go back and forth with in a dialogue. So I'll give everyone four minutes on this step; it'll take us right to 11, and then we'll share back with the group. I just want to mention, since some of you said in the chat that you've been using ChatGPT 3.5: for images, only ChatGPT-4 provides alternative text, which is what Lucas mentioned earlier about the paid tier. Gemini also requires payment for that, so some of these tools are still behind a paywall; with Copilot, though, you should be able to use it for images if you're signed in, according to Lucas. I also wanted to mention the importance of prompting. I work a lot with prompting myself, and I see how the model adapts to the way we use it. I normally ask ChatGPT to write something, for example a report, based on data I give it, and sometimes I follow up with "can you write it less formally?" What I see as a result is that, after a while, sometimes without my even asking, it gives me both a formal and a less formal version. So I think one reason the same prompt can get different responses is how you prompt: over the course of a conversation, ChatGPT adapts to the prompts it has received and responds based on those earlier requests. Another important point is always having a human evaluate the result and the output: even for something as simple as alternative text for your images, it's important to read through it.
That's how you ensure it's the message that needs to be delivered. So: multiple means of action and expression, the last principle in UDL, emphasizes allowing your students or participants to demonstrate their knowledge and understanding in different ways, enabling them to choose the mode of expression that best suits their abilities and preferences. When it comes to assessment, whether a final exam or an activity, formative or summative, think about what options you can give your students to demonstrate a specific piece of knowledge if you have particular outcomes in mind. Is an exam necessary? Must they write an essay, or can you give them different options: an infographic, a podcast interview? This principle focuses on providing different options for your learners to demonstrate their knowledge. A few examples: providing options for assessments, providing multiple tools for composition or construction, providing sentence starters or prompts, multi-part assignments (if an assignment is complex, you might divide it into sections and give feedback at each stage), opportunities for mentorship, as well as self-assessment activities or automatic feedback. Also, if you have activities in your classroom, think about the different options you can give students for participating. Sometimes it's as simple as, if you have a video, essay, or article, giving them the option to review it before class. Or if you have a five-minute in-class reflection, giving them the option to reach out to you after class instead. It's about looking at your own context and asking: what are the ways I can give my learners more options to show me their knowledge?
So that's the focus of this principle, and I'll pass it on to Lucas. Wonderful. These two are the most fun for me. I'm just going to stop sharing so I can share with audio in a minute. We'll do two demos and then give you a chance to try. The goal of the first demo is to give students a chance to get automatic feedback through playing games, and the second is an example of tutoring or mentoring with audio. (Sorry, I'm just setting up my sound here. There we go.) So the first example is using generative AI to create games for students, and I find this one really powerful. If prompted well, these tools can act almost like game engines, playing games with students to help them learn and get that automatic feedback. Recently my son was studying WHMIS (workplace hazardous materials), and if you've ever studied WHMIS, it may be one of the most boring things to study ever. What I did is have generative AI make a WHMIS game with him so he could go back and forth and learn about workplace safety while playing a game. The game I'm going to demo here was created by a history professor who wanted his students to experience being an apothecary in medieval Paris. This works quite well on GPT-4, I find. If you look at the prompt itself, you'll see it's very complex: "you are a historian," so he's given it a persona; the game should feel authentic, accurate, and gritty; there should be turn taking; the game ends on the 10th turn; and here are the commands for it. Let me make my screen a little larger so you can see this one. Great, so let's try that. Now it's going to describe the current situation, list the commands, and explain how the game works.
"You find yourself in a dimly lit room, the air thick with the scent of herbs and the distant sound of coughing. Your table is cluttered. Outside, there's panic and despair. What do you want to do?" I'm going to select the command "diagnose." "As you step outside, the grim reality greets you; here's the diagnosis process. How do you want to treat this person?" I'm going to use bleeding, and I can just keep going through the commands. I think it's really interesting to think about this ability, using prompts (and there will be more and more apps for this), to have specific games for your discipline where students can get automatic feedback while staying engaged in their understanding. The second example I wanted to share with you is a tutoring example. I apologize for the ad; we'll just have to sit through it for a second. Don't worry, I didn't monetize this. The example shows me using ChatGPT-4 on my phone with the audio feature, and I've asked the generative AI to have a Socratic dialogue with me about the state of nature. I'll play a little of it for you. I find the voice functionality incredibly powerful in these tools, including changing languages, for example: "act as if you're in a French cafeteria and practice my French." Let's make sure you can hear this. Afsani, can you give me a thumbs up if you're able to hear it? "I'd like you to have a Socratic dialogue with me about the state of nature. Make sure to ask me a lot of questions to gauge my understanding." "Certainly, let's have a Socratic dialogue about the state of nature. What do you understand by the term state of nature?" "Isn't it a philosophical construct?" "Yes, you're right. The state of nature is indeed a philosophical concept. Can you elaborate on what that concept entails and how it's often discussed in philosophy?" "Isn't it related to governance? I'm not really sure." "You're on the right track."
"The state of nature is often discussed in relation to governance and political philosophy. In this context, it's used to explore what human life might be like without organized government." So I won't go through any more of that, but it gives you an idea of the power of these tools for holding a spoken conversation, and what that could mean when we think of the tool as a mentor or tutor for our students: a way to give them a different type of action and expression. So for the last activity, we'd like you to see if you can create a game for your topic. This can be a tricky one: sometimes it will give you a board game instead of playing a game with you. Try using phrases like "play a game with me" or "you take the first turn" to get it to actually do turn taking. Then see if you can have the model act as a tutor for you in a subject area in your discipline. I would use a prompt like: "act as a tutor for X; help me understand it; start by asking me the first question." Saying "start by asking me the first question" builds in that interaction rather than having it give you all the questions at once. If you want, you can also try breaking down a complex assignment into a two- or three-part assignment that includes peer assessment, but perhaps focus on the first two tasks for now. Again, you can find all of this in your worksheet; I'll leave this screen up for a minute. What's our time? We'll do another four minutes on this. Okay, why don't I stop sharing there, and just to recap where we've been before Afsani jumps in with the last activity: we talked about multiple means of representation, where we looked at creating concept maps, translating text or transforming it in different ways, and alt text. Then we looked at multiple means of engagement.
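The turn-taking structure that the game and tutor prompts ask for (a fixed number of turns, a command menu, feedback each round) can be sketched as a simple loop. This is an illustrative sketch only: the model call is stubbed with a placeholder, whereas in practice the generative AI itself both narrates each turn and provides the feedback, and the commands shown are hypothetical.

```python
# Commands and turn limit echo the history professor's prompt:
# a command menu and "the game ends on the 10th turn".
COMMANDS = ["diagnose", "bleed", "herbs", "rest"]
MAX_TURNS = 10

def play(choose_command, respond):
    """Run a fixed-length, turn-taking game.

    choose_command: picks a command each turn (the student's move).
    respond: returns in-world feedback for that move (the model's role).
    """
    transcript = []
    for turn in range(1, MAX_TURNS + 1):
        command = choose_command(turn, COMMANDS)
        feedback = respond(turn, command)
        transcript.append((turn, command, feedback))
    return transcript

# Stubbed run: always "diagnose", canned feedback in place of the model.
log = play(lambda t, cmds: cmds[0],
           lambda t, cmd: f"Turn {t}: you chose to {cmd}.")
```

The same loop shape covers the tutor activity: swap the command menu for free-text answers and let the model's "feedback" be its next Socratic question.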
And we talked about a couple of ways you can create more engagement with your students, including rubrics. Finally we ended with action and expression, where we talked about creating multiple means of action and expression and looked at two approaches: an interactive tutor or mentor, and a game. Now Afsani, as the very last session, is going to lead a larger sharing activity with us to end things off. Before that, since I'm sure some of you will be leaving soon, I'll mention that we will follow up with the slides, the worksheet, and the session recording; we should have those for you next week. Go ahead, Afsani. So for the last activity, we want to share with you this link to our Padlet, where you can post some of the ways you use, or plan to use, generative AI to enhance UDL in your discipline or area. I want to emphasize the reasoning behind this workshop: UDL is central for us in instructional design, one of the ways to move forward in making things more accessible and inclusive. With generative AI evolving so quickly, and given the concerns around it, Lucas and I talked and decided that the best way to harness its power for our goals was to use it to make program courses more inclusive, and that's why we developed this workshop. There are also real concerns about the time faculty need to spend to make things more accessible and inclusive and to integrate different UDL strategies. That's why we're asking you to think about strategies and, if you can, share them in this Padlet: how you have been using AI, or are planning to use it, to implement or integrate some of these principles into your courses.
Whether you're thinking about using it for images to make your content more inclusive, or for some of the engagement strategies in your activities, I would appreciate it if you could use the Padlet link to add your ideas. As I mentioned, you already have access to the worksheet, the document with all the prompts as well as the resources, and this link will also be available, so feel free to add your thoughts and ideas. If you're not filling it out today, or you're running late and want to get ready for another meeting, please come back to this Padlet; we would love to hear from you and to collect some of your examples and ideas to share with others. If you have any questions, or ways for us to improve this workshop, please do not hesitate to contact us. I'm sure the contact information is shared here, Lucas. Do you have any other comments before we finish? No, I think that's it. Thanks, everyone, for joining; it's always such an interesting workshop and a privilege to learn with all of you. We'll stick around for a few minutes if you have any last questions, and again, we'll follow up. Thanks, everyone, for joining.