I think we're all here or just about all here. So we'll get started. Good morning, everyone. Thank you so much for joining us. As Alisa is reminding us in the chat, the session is being recorded, so feel free to turn your camera off if you don't wish to be immortalized on this video. We are very grateful to have you join us for this conversation. In a moment, we'll give you an overview of the session, but as we begin, I'd like to acknowledge that UBC and the homes from which we are presenting today in Vancouver are on the traditional, ancestral, occupied, and unceded territories of the Squamish, Musqueam, and Tsleil-Waututh nations. We invite you to put your own territory acknowledgements in the chat if you're joining us from another location, recognizing that such acknowledgements must be paired with actions. We do so with reflection on how teaching and learning with integrity, as we consider today, must always engage with the histories of education as colonial, with recognition of harms done through these practices, and we commit to ongoing learning, rethinking, and active disruption of those legacies. Ainsley, can you take it away? Great, thanks, Laurie. So my name is Ainsley Rouse. I'm the academic integrity senior manager in the Office of the Provost and Vice-President Academic. In November 2022, OpenAI released ChatGPT, a generative AI tool that could interact in a conversational and even human-like manner. Immediately, concerns were raised about its impacts on education, on assessment, on the essay, and the response from higher education was divided. Today, six months later, universities around the world have had to contend with constant change, almost daily news, and new developments and expansion in the area. Impacts on teaching and learning have, in fact, been a very strong area of public interest, both within and outside of the academy.
However, from early calls to ban these technologies, the general discourse has moved towards discussion of how to integrate them into education. We've moved from policies towards guidelines. So our session today, roughly six months after ChatGPT's release, seeks to provide an introduction and a framework for thinking about these tools. We designed this as a companion session to a session being offered this afternoon, which will focus more on specific applications in the classroom, entitled Exploring the Opportunities and Ethical Considerations of Generative AI in Teaching and Learning. That's from 1 to 2 p.m. today on Zoom. And we also designed this as an overview that would touch on a wide variety of topics, from more information on how these tools work to what they could look like in the classroom. Our session will also be a valuable moment to hear questions coming from the UBC community, as these tools are still very new and still evolving very quickly. If you submitted a question in advance, we have read them and tried to integrate them, but please do add your question in the chat, or raise your hand if you have another question at the end, or the same one. So we've brought together three perspectives today to provide information on different approaches, challenges, opportunities, and impacts of ChatGPT and generative AI tools. First, a technical introduction from Dr. Vered Shwartz, assistant professor in the computer science department and CIFAR AI Chair at the Vector Institute. Next, an overview of institutional guidance and resources that are available, from me, Ainsley Rouse, academic integrity senior manager in the provost's office. And finally, ideas and strategies for how to approach AI tools in the classroom from Dr. Laurie McNeill, associate head undergraduate and professor of teaching in the Department of English Language and Literatures.
We've left considerable time at the end for questions and discussion to hear from you, because while we are six months in, this is still very new, it's still changing, and conversations are ongoing. As we all know, AI tools have the potential to change how we teach, how we learn, and how we work at UBC. So I invite Dr. Vered Shwartz to start this panel with a technical overview of these tools. Thanks, Ainsley. So I'm going to start with a demo, even though I'm assuming most people on the call have already had a chance to play around with ChatGPT. But if you haven't, this is what it looks like: it's a chat, and you can ask it questions, you can ask it to write about certain topics. So in this example, I just asked it to generate an outline for a talk that I wanted to give, just to test whether it knows about this topic. And it generates a pretty good outline for the talk, including the introduction to the topic, the approaches, the challenges, et cetera. I haven't actually followed the advice and tried to create the talk based on the outline that it generated, so I don't know if it's perfect, but it's definitely impressive, both in its ability to understand my question, which I wrote in natural language, in English, and in the coherence, relevance, grammaticality, and basically human-like quality of the text that it generated. And this is pretty new; this is something that in the natural language processing community we've only made progress on in the last few years. Chatbots like ChatGPT are language models. So I want to tell you how language models work in general, mention a few of the popular models, and then focus on their effects on education from a more technical perspective. A language model is very much like the autocomplete on your phone. It is a function that gets text as input, which is called a prompt. That could be the beginning of a sentence, or it could be a question that should be followed by an answer.
And the output of this function is the most likely next word, in English in the case of an English language model, to continue this text. Or more precisely, it is a probability distribution over the entire vocabulary, the English vocabulary. So for example, if you have an input like "parrots are among the most intelligent," then some of the most likely words to follow are "birds" or "animals." They would get a pretty high probability, but completely random and unrelated words like "coffee" would get near-zero probability. And we can actually use that to generate text by inputting a prompt like "parrots are among the." This changes the internal representation inside the language model, which is made of neural networks, and then it can output the prediction for the next token. We can sample from this distribution, get a word like "most," and then we just continue in a loop: feeding this word back into the model as input and predicting the next word, and so on, until we want to finish generating text. So that's the basis of a language model, and this technology has been around for a while. GPT-3, which is the model that came right before ChatGPT, does exactly that. Machine learning models have two steps. The first step is where they're trained to perform the task, like predicting the next word in the sentence. And the second step is where they're already trained and users can interact with them. For models like GPT-3 or ChatGPT, the training step is basically reading all the English text on the web and trying to predict the next word in the sentence. And as a result of being exposed to so much text, they learn a lot about the English language. So for example, if they see a sentence like "parrots are among the most intelligent," then they know that this sentence should be followed by a noun.
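The prediction loop just described can be sketched in a few lines of Python. This is a toy illustration only, not how ChatGPT is implemented: the hand-written probability table stands in for the neural network, and for determinism the sketch greedily takes the most likely word instead of sampling from the distribution.

```python
# Toy "language model": maps a prompt to a probability distribution over
# possible next words. In a real model, a neural network computes this
# distribution over a vocabulary of tens of thousands of tokens; the
# numbers here are invented purely for illustration.
NEXT_WORD = {
    "parrots are among the": {"most": 0.9, "least": 0.1},
    "parrots are among the most": {"intelligent": 0.8, "colorful": 0.2},
    "parrots are among the most intelligent": {
        "birds": 0.6, "animals": 0.39, "coffee": 0.01,  # "coffee": near zero
    },
}

def generate(prompt: str, max_steps: int) -> str:
    """Autoregressive generation: predict a next word, append it to the
    text, feed the result back in as the new input, and repeat. Real
    systems sample from the distribution; here we take the single most
    likely word so the output is deterministic."""
    text = prompt
    for _ in range(max_steps):
        dist = NEXT_WORD.get(text)
        if dist is None:  # the toy table has no entry: stop generating
            break
        text += " " + max(dist, key=dist.get)
    return text
```

For example, `generate("parrots are among the", 3)` follows the chain of most likely words and returns `"parrots are among the most intelligent birds"`, which is exactly the loop of feeding each predicted word back into the model.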
And at a higher level, they also know that this sentence should be followed by a category to which parrots belong. So "birds" and "animals" would get a higher probability than "mammals," for example. In addition, they learn to generate coherent and human-like text. And ChatGPT, just as a technical note, is based on GPT-3. So it's also a language model, but it was additionally trained with human supervision to do two things. First, to follow natural language instructions: if you tell it something like "write an essay about something" or "summarize the following text," it will know that it should follow the instruction rather than generate a continuation of that instruction. And second, it was trained to provide more helpful answers. So there was a lot of human supervision to make it very helpful, and we can see that it is indeed a lot more helpful than the previous models. ChatGPT is a general-purpose chatbot. It can answer questions about various topics, and it's pretty impressive, both in the breadth and the depth of what it knows. One of the features that is helpful and different from the previous models is that OpenAI added some filters to prevent it from generating offensive content, which previous models didn't have. But there are a lot of limitations with ChatGPT that are important for us to be aware of when we use it. One of the major limitations is that even though it always looks very confident in its answers, the answers very often are incorrect, or at least contain some elements that are not entirely accurate. If you're asking it about something that you know, you will notice that. But if you're a student using the model for your homework, or a person using it for work, or an instructor using it to create content about something they're not entirely familiar with, you might actually miss that. So you need to know that you should take the answers with a grain of salt.
It makes up facts, it makes up references; sometimes it is problematic. Recently it was also incorporated into the search engine Bing. So now it no longer makes up references, but it still sometimes adds a footnote pointing to an existing reference, and that existing reference might not actually support or contain the information in the summary produced by Bing. So there's still a problem of making things up. There are a lot of other limitations; I'm quickly going over them. It is very much like the smartest and least smart person that you know, because on the one hand it knows a lot about the world, it knows about many different topics, but on the other hand, when you ask it a question that requires a little bit of reasoning that we as humans are very much capable of, it will sometimes be wrong in ways that humans are not going to be wrong. So that's another problem. And then there are privacy issues, both in terms of the data it was trained on and the data you give it. We don't really know what it was trained on; it's a lot of data that was on the web, but some of it was maybe more personal, more private, or behind some license. In addition, it also records everything you put into the chatbot. So if you put in any kind of personal information or student details or anything that shouldn't be recorded by third-party models, then you should know that it is being recorded. It also doesn't understand language the way that humans do. And some versions of the model can't answer questions about recent events, while other versions can. And the offensive language filters, while it is good that they exist and were incorporated into the model, can still be bypassed with simple tricks. Let's focus on the effects on education. I want to start with my personal take. I think that we as instructors need to rethink how we evaluate students, and, for example, redesign our assignments so that they're not going to be easily solved with ChatGPT.
But at the same time, I don't think the focus should be on banning the tool. I don't think we should try to chase students to see if they cheated. I think we should focus our energy more on using these tools to make learning more engaging and to enhance student learning. And I think that banning ChatGPT is futile. It's probably similar to the problem we faced years ago when the internet was new, or when Wikipedia was new: it is challenging, and it does change our assignments, but I don't think it's really feasible to prevent students from using it. They're going to use it anyway. And I also don't think that we should rely much on detection tools for plagiarism detection. So I want to say a few words about these detection tools. These are tools that are designed to automatically detect whether a text was AI-generated versus human-written. There are different types of detectors, but they're all essentially based on recognizing the statistical differences between AI-generated and human-written text. And as these language models become better and better, it is actually becoming harder to find statistical differences between the two. So the detection tools are actually getting worse. And they have other limitations. For example, some of these detection tools are based on having access to the actual language model and being able to change it, which is not the case with proprietary language models like ChatGPT. They're also model-specific. So if a teacher is using a detector tuned to ChatGPT, a student who is slightly more sophisticated can just use another language model, and it won't be caught by that specific detector. I'm most concerned about the accuracy issues in terms of false positives, because these tools are not 100% accurate.
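To make the "statistical differences" point concrete, here is a deliberately crude sketch of the idea behind one family of detectors. A score such as the average log-probability a language model assigns to each word, which is related to perplexity, tends to be higher for machine-generated text, since the model finds its own output very predictable. The function names and the threshold below are invented for illustration; real detectors are more elaborate, but they inherit the same weakness: nothing prevents a human who happens to write predictably from landing on the wrong side of the cutoff.

```python
import math

def avg_log_prob(token_probs):
    """Average log-probability of a text under some language model, given
    the probability the model assigned to each of its tokens. Machine
    text tends to score high (very predictable); varied human prose
    tends to score lower."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def flagged_as_ai(token_probs, threshold=-2.0):
    """A single hard cutoff like this is exactly where false positives
    come from: fluent, formulaic human writing can score above the
    threshold too, and light post-editing of AI text can push it below."""
    return avg_log_prob(token_probs) > threshold
```

With these toy numbers, `flagged_as_ai([0.9, 0.8, 0.9])` is `True` while `flagged_as_ai([0.05, 0.1, 0.02])` is `False`, which illustrates why both false positives and post-editing evasion are built into this style of detection.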
So there could be cases where a student didn't actually plagiarize and didn't use the tool, and the detector would still flag them as if they did. And as I mentioned, students can be sophisticated with their cheating. If they are aware of these detection tools, which they probably are, it's very easy to get around them. They could just use a model like ChatGPT to write the essay for them and then post-edit it so that it doesn't look AI-generated and won't be caught by the detector. So overall, I find these detection tools very limited, and I think our energy should be focused elsewhere. Thanks. Thank you, Vered. I'm going to jump in now and talk a little bit about some of the institutional guidelines and resources on ChatGPT and generative AI tools. As we all know, if we follow the news at all, no sooner were these tools released than instructors, institutions, and students began to wonder what kinds of rules, regulations, and limits might need to guide their use. For most, this was all very new, and the speed at which the tools were developed made it hard to keep up at times. The topic has been changing constantly, and what we are discussing today, it's important to remember, is a small but very significant corner of it: where this new technology meets teaching, learning, and academic integrity. At UBC, as you can see on the slide, several resources and pieces of information have been brought together since the release. There was an FAQ page created on the academic integrity website, and you'll see some links in the chat; it assembles some frequently asked questions for both students and instructors. We also invite you to submit more questions to it if you don't see your questions reflected. Prior slide, please. There's a feedback form on that page. Secondly, there's an academic integrity newsletter that focused its first issue on generative AI tools.
Thirdly, the CTLT, and this will be discussed this afternoon, has released a resource called Assessment Design in an Era of Generative AI. And finally, we're hosting a discussion, a faculty forum, on the 13th of June called Navigating Generative Artificial Intelligence Tools in the Classroom: Faculty Approaches. So there are several things that have been released, and there are conversations that have started and are ongoing. In this section, I will review some of these guidelines and resources. The FAQ is a very good place to start. It includes information about instructor use, student use, misconduct, detectors, and citation, among other factors. So today, in the seven minutes that I have, I'm going to talk more specifically about institutional guidelines, whether there is a ban on AI tools, AI tools and academic misconduct, and lastly, AI detectors. Next slide, please, Vered. So, should higher education institutions ban ChatGPT? This was a very prevalent question in the early days after the release. And some of these early restrictions did happen: school boards, some institutions, even countries setting limitations around use. The question of sweeping restrictions was very present in the early months of 2023. But at this time, most institutions have taken the view that these tools might have potential with proper education and guidelines. The vast majority of higher education institutions have moved from a reactive to a more flexible, responsive, and educative approach. What we're seeing most often is the promotion of an open approach to AI tools that is acknowledged, intentional, and ethical. There's been a shift, as I mentioned, away from restrictive policies and towards creating teaching and learning guidelines.
If you Google, you'll find that guidelines from universities have proliferated in the past six months. They deal with topics like how to communicate with students, how to include these tools in teaching, how to exclude them from teaching, as well as ideas for some of the ways that students can use these tools. So I wanted to repeat, as you see on the slide: at this time at UBC, there is no institutional ban on generative AI tools, and the management of these tools is a course-level, and potentially program-level, decision. It's a matter of individual instructor choice, recognizing that there might be, in some cases, additional program-level constraints or considerations to be taken into account. Now, because this is a course decision, these kinds of guidelines and resources, and sharing experiences, are especially important at times like this, when instructors are facing a big decision about how, if, when, and why to include artificial intelligence tools in their courses. Further along in this presentation, Laurie will be reviewing some of these potential approaches and choices that can be made in the classroom. And this afternoon's session will offer a hands-on approach. So next slide, please, Vered. The fact that there is no institutional ban brings up many other questions, opportunities, and challenges. One of these has been how the use of AI tools intersects with academic integrity and academic misconduct. So to begin, I will repeat what's on the slide: the use of AI tools does not automatically equate to academic misconduct, although these tools can be used to cheat. Their use can potentially be considered academic misconduct if it goes outside the bounds established by the instructor. So what are the policies that guide this currently at UBC?
The academic misconduct regulation, which is found in the academic calendar section entitled Discipline for Academic Misconduct, does not mention artificial intelligence tools, but that does not mean their use might not be considered misconduct. The use of these tools could intersect with a variety of prohibited behaviors, including accessing unauthorized resources or plagiarism. The key thing is that these tools could be used for academic misconduct, which is defined as any conduct by which a student gains or attempts to gain an unfair academic advantage or benefit, thereby compromising the integrity of the academic process. On the ChatGPT FAQ page, which I mentioned earlier, there are three scenarios outlined. These are, next slide please, Vered: first, where the use of AI has been prohibited by the instructor; second, where the use of AI has been allowed by the instructor; and third, where the use of AI has not been discussed or specified by the instructor. So I invite you to take a look at those scenarios, which are outlined in terms of whether this equals academic misconduct. That's some language that is available now for both students and instructors. In these scenarios, you'll see there is a role for the instructor in communicating whether these tools are allowed, within what limitations, and why they have or have not been allowed. In many ways, this is not vastly different from academic integrity best practice around other tools that might be allowed or not allowed in the classroom and specified in the syllabus. And it's also connected to clarifying academic integrity expectations broadly speaking. Because in many ways, these new tools, just as the pandemic did before them, draw attention to a return to the basics of academic integrity. As academic integrity scholar Sarah Elaine Eaton notes, the technology is not the problem.
So any strategy aimed at drawing awareness to ChatGPT should return to academic integrity basics: assessment design, understanding why students cheat in the first place, reminding them why they should not cheat, and then reminding students and faculty about academic misconduct rules, regulations, and resources. On that note, UBC has extensive resources to support instructors and students in maintaining a culture of academic integrity. There's a new central academic integrity website, which will be shared in the chat. We have a new academic integrity hub in the provost's office to support students, faculty, and instructors around academic integrity. We have a promotional campaign called Take Five for Integrity, which can either be a request for an in-class presentation about academic integrity or a set of downloadable resources, a slide deck and notes, for delivering this presentation in class. And on our website, you will find guidelines around academic integrity syllabus statements, modules, and other resources. So all of this can be found in one place. The last thing that I'm going to talk about is detection. The topic of misconduct obviously brings up the topic of AI detectors. Now, Vered outlined these more from a technical perspective, but they've popped up alongside this new technology. You might have heard of some of them that received more press than others: GPTZero, which was developed by a Princeton undergraduate student over winter break; the AI classifier, which is OpenAI's own classifier tool; and also Turnitin's AI detection feature, which has been made part of their similarity report. So of course, tech-based solutions might appear tempting at first, but it's important to remember that these tools might be unreliable and untested, as Vered pointed out, and could lead to false accusations in some situations.
And as I just pointed out, there are also lots of best practices and strategies to avoid getting to this place in the first place, going back to basics around academic integrity. So here, very briefly, I would like to draw attention to guidelines around two key areas: first, UBC's decision not to activate Turnitin's new AI detection feature, and secondly, the more general AI detectors that are openly available on the internet. First, Turnitin. On April the 4th, Turnitin released a new AI detection feature as part of their similarity report. As you may know, UBC is a Turnitin client but is not enabling Turnitin's AI detection feature. So while you may have heard of this feature and know that UBC subscribes to Turnitin, the feature is not available for use at this time. You can find the full rationale behind this decision on the Learning Technology Hub website. It outlines the reasons and factors taken into account for this decision. These include the inability to review and validate the feature; the little advance notice that was given around the release; the accuracy testing being in its early stages; the inability for instructors to check suspected passages against source material, which in this case would not exist per se; testing for potential bias being in its early stages; and the results not being available to students. But Turnitin is not the only AI detector out there. There are other detectors that are accessible on the internet to students, faculty, and anyone who wants to use them. There is no UBC-supported detector, and it's very important that users, if they choose to use these, recognize that they might be fallible and untested. If an instructor still chooses to use these, they should do so with full awareness and understanding of the limitations. As an institution, UBC has not had enough experience with these systems to be able to support them.
And instructors might wish to review UBC's response to Turnitin and the AI detection feature and really think about some of the challenges that were raised. What we've been reiterating is that in no case should such a tool be used as the sole decision-making factor. So then how can an instructor detect misconduct when it happens? If an instructor suspects that an assignment or an assessment has been completed with unauthorized use of AI tools, they should proceed as they would for any potential allegation of misconduct. In most cases, and this is important because this happens differently in different places in the university, per the regulation, there might be something in the assignment that prompts an allegation, some hunch around misconduct. The academic misconduct system is also set up to give students the opportunity to respond to an allegation. So for an overview of the misconduct process, I invite you to consult the academic integrity website, where there's an outline of the academic misconduct process for students and for faculty. I'm going to pass it on to Laurie now, who's going to talk a little bit about generative AI tools in the classroom. Thanks, Ainsley. I begin by wanting to acknowledge the potential wrecking-ball effect of Gen AI, ChatGPT, and other forms of generative artificial intelligence. I'm a little bit concerned that the panic about this kind of AI is a threat to all of our hard-won lessons about educative approaches to the other AI, academic integrity. We've learned so much about creating and maintaining a culture of integrity, and I worry that this perceived threat may cause a return to the default rule-based punishment-and-suspicion mode: the assumption that all students will cheat, and the reality that some students will be disproportionately accused of misconduct.
So key for me in thinking about teaching and learning in the ChatGPT era is that we have to remember what we've already learned about how to foster and maintain cultures of integrity. And then we can draw on our many resources, including the ones that Ainsley has just outlined, but also including our students, to keep learning and adapting to address this new development. In keeping with our panel's introductory approach and our multidisciplinary audience, what I'll do in my time today is outline principles and practices in a fairly high-level way and then share some ways to take those principles into your classrooms. I'm happy to speak more specifically about particular tweaks to assessments and applications in the Q&A. Next slide, please. I see five principles as foundational to an educative approach to teaching and learning with, or in a time of, generative AI, or Gen AI, including ChatGPT and its many competitors. These are, as Ainsley has noted, really the same principles that underpin a general educative approach to academic integrity, and I hope we'll find that reassuring: we already have effective frameworks and strategies that will help us as we go forward. These principles include the work of learning about Gen AI, as you are by coming to this panel. For some of us, that may mean getting a handle on a technology that's unfamiliar to us. For some of our students, that may mean thinking in different ways about technology as not neutral, and about the choices and responsibilities they have as users of this technology. These principles also invite us to keep committing ourselves to giving our students the chance to make informed decisions about Gen AI in relation to our courses and beyond. Ideally, we'll invite students as partners to help us redesign with AI in mind, allowing them to invest in their own learning and its evaluation. Such an invitation directly intervenes in the adversarial or policing dynamic that AI versus AI might reintroduce.
As always, whatever choices we make about whether or not we see the use of Gen AI as aligned with the learning goals of our courses, and that really needs to be the driving factor, we need to communicate that expectation to our students and engage them in conversation throughout about how and why. Ideally, our decisions about how to respond to this development, whether to opt in or out, and the changes we might make to our assessment practices as a result, will be informed by and model the ethical elements that are core to an ethos of integrity. That includes being mindful about accessibility, implicit bias, and also the technology itself, and I'll say more about that in a minute. So let's make sure that we aren't undoing our efforts to address systemic ableism, racism, and other biases as we proceed. So how do we put these principles into action? Next slide, please. I propose that we create spaces for learning and conversations with our students, yes, and also with each other. We all need to understand how Gen AI works, as Vered has explained, but also think about what and who is making this technology work: not only the issues of whose content is being scraped without attribution, but also the exposés of working conditions, for example, of the Kenyan gig workers who were hired to scrub offensive material, as Vered mentioned. We can also think about how our data and any content we upload is then used by Gen AI companies, as we heard today. As Vered has noted, we all realize that Gen AI confabulates: it makes up stuff when it doesn't have the answers, and our students really need to understand this. This and the other limitations we've learned about will certainly change, but we need to recognize, and have our students reflect upon, the implications of working with a platform that draws only on digital sources, not digitized but born-digital sources.
And of course, most of those sources reflect Western, English-language perspectives, whose ideas and attitudes are being reproduced as neutral, as norms, when these tools make predictions about what to say and how. This updates concerns raised about algorithmic biases throughout the 2010s, for example in Safiya Noble's Algorithms of Oppression, with additional urgency as these tools become increasingly ubiquitous. Naomi Klein's recent piece in The Guardian captured concerns being raised more broadly along these topics. Students need to understand these limitations and then be accountable for what Gen AI contributes to their work, whether that's fact-checking or content-checking or documenting its role in whatever they produce. As Sarah Elaine Eaton notes, humans can relinquish control but not responsibility. I'll close by suggesting some ways to embed this kind of collaborative learning into our courses. Next slide, please. Online, in person, individually or in groups, let's make spaces and opportunities for students to read, write, post, and talk about what it means to use Gen AI. You can ask them to work through the issues I just raised, to do their own research and share and think about their findings, and also to extend those considerations with, for example, a case study, what Autumm Caines calls a techno-ethical audit, or the reading and discussion activity that Sarah Eaton has shared on her blog, which I've listed here. The guiding question really needs to be the last one: how can we use Gen AI with integrity? Which might mean, in some cases and in some courses, not using it at all, because we find it is antithetical to the learning our students need to demonstrate and the proficiencies they need to develop. Next slide, please. I'm returning to this principle of clear expectations to suggest places, such as the syllabus and also every assignment, where we can make our expectations explicit.
You might in fact decide that its use is okay in your courses and assignments, but ideally you'll provide some parameters that will guide its acceptable use. How much Gen AI, and in what forms? Is it okay to brainstorm with Gen AI, to outline, to have it make your tables and spreadsheets? It can do coding, though poorly, as I understand. If it is used, where do you want students to document that? In a methods section, in acknowledgments, as a co-author? You might need to look at emerging norms or guidelines in your discipline or profession. For example, the journal Science does not allow authors to submit pieces with Gen AI as a co-author, because it cannot take responsibility for what it has produced. Provide that explicit guidance and explain what will happen if those expectations are violated. I would also encourage us to identify, through discussions with students, what parts of an assignment might make them turn to Gen AI even if they aren't supposed to. In my own project on academic integrity in first-year courses, we discovered that our expectation as instructors of originality was totally confounding, and in fact terrifying, to many students, a real barrier to their doing their work with integrity. Similarly, Sidney Dobrin and others recommend giving students more time and instruction in coming up with their own ideas, so that they don't need to transfer that work to Gen AI. So we have opportunities, again, to improve the learning of our students by looking at these pain points. Next slide, please. Finally, I highly recommend taking a team- and discipline-based approach to rethinking our courses and assessments. Help each other think about the opportunities of such a revision, and be mindful of potential barriers or harms we want to avoid introducing and perpetuating. For example, in my own department, we struck our own subcommittee to think about what the implications of Gen AI are for teaching and learning in English language and literature courses.
And one of the things that we talked about was the potential issues and barriers that could result from shifting all of our assessments to in-class handwritten responses. We came up with ways to mitigate those issues, recognized the potential ableism and other challenges, and offered our colleagues alternatives, or ways to reduce those impacts. I would suggest that it's far more productive, and indeed far more human, to do this work together. That brings us to the end of our formal presentation. I think there is a next slide, Vered. Yes, okay. So what we have tried to do in our three presentations is lay out a common understanding of the major issues, the concerns, and the technology itself. We know you have lots of questions; I have just seen the chat moving up out of the corner of my eye. So we'll turn to questions now, inviting you either to put up your hand, or we'll start drawing from the chat. We have Jared Taylor, who's supporting us in the question period. If you do have a question, please raise your hand, or else we can start with some of the questions that have been in the chat. So Jared, maybe let us know where to begin, please. Okay, I see you. So there was, oh, sorry about that, Laurie, there was an earlier question from Miti. Let me just paste it in and read it out. I suspect this might be for Vered: how easy is it to create open-source models that have no limitations implemented and that are more unique? So I think some of the limitations are more difficult to address right now than others. I think that right now companies are already working on the factuality problem. By incorporating these models into search engines, you can, well, it doesn't completely solve the problem, but you could do some kind of fact-checking on the output of the model. It's still a difficult problem.
There's no straightforward solution to that, but I know it's something that companies are probably pretty invested in right now. But then there are the other problems, like being able to do more advanced reasoning. If you just ask it to summarize some topic, and there's data on that on the web, it's probably going to do a pretty good job, minus the inaccuracies that sometimes appear in the output. But if you ask it any kind of question that requires a little bit more thinking, it's actually not guaranteed to, quote unquote, understand it, because of the reasoning problem that I mentioned earlier. When we read a question, or a paragraph with some information, language is actually very efficient. It's often underspecified, and we don't provide more information to the reader than we think they need. We don't repeat things that are common sense, or things that we think other people already know. So there's a lot of reading between the lines that we do, and these models are not necessarily always capable of doing that. That's why I said it's like both the smartest and the least smart person that you know: on the one hand, it knows a lot about many different topics in the world, but on the other hand, it can sometimes make a really silly mistake when any kind of reasoning and reading between the lines is involved. So some of these problems are more difficult, and it's hard to predict when, or whether, it will be straightforward to solve them in the near future. Thanks, Vered. I've got to look at the chat now. I'm just going to run through a few of these questions, if that's okay with everyone. I might jump to Tiffany's and then sort of move around. So Tiffany Potter asks in the chat: Laurie, could you please briefly share with everyone the recommendations you made to ELL?
Yeah, I was just scrolling to see if I could find the document and pop it in the chat, but apparently I can't multitask. What we did in that group was bring together four or five faculty from across our department, across ranks, to think about the particular kinds of assessments and the kinds of learning that we want our students to demonstrate in English language and literature courses, and then offered some recommendations about whether you might want to use it and how you might do that, but also how to use it with integrity. So if you give me a moment, I'll see if I can find the document and share it in the chat. Great, thanks, Laurie. I'm going to scroll to, I see, Katie Marshall: I helped put together this opinion on ChatGPT for a journal. Great, thank you for sharing that. Melanie Rivers: if you come across an inaccuracy in a ChatGPT conversation and correct it, does it learn from this? So I think, Vered, this one's for you again. Yes, so it is capable of learning from interacting with users. If you ask it to generate an answer for something, you then have these thumbs-up and thumbs-down buttons, or if you ask it to regenerate an answer, it then asks you whether this answer was better than the previous one. So I think they do have some mechanism to learn from interaction with users. But again, if it's an inherent problem of being able to reason about the world, it might fix that specific question, but not the problem as a whole. Great, thank you. I'm going to jump back up to the first question that was posted. Yvonne Hopkins asks: what guidelines are there for faculty and staff using AI tools? I apologize, I should have been inviting those who posted in the chat to add anything to their questions, so I'll do that moving forward. Yvonne, is there anything that you'd like to add to your question?
Not particularly; my question revolves more around people using it, maybe even to help develop curriculum, or in their regular admin work. Are there any guidelines that have been put out for that? I don't think we've had official guidelines from the institution, but in my reading about Gen AI I have come across recommendations and cautions that if you're, for example, doing some of your course management, including correspondence, you just have to remember you can't put anything that identifies a student into that database, because it then is taken up by the model. I don't know whether these models are FIPPA compliant, but we just need to be aware that if we did something like a mail merge, all those student names and identifying materials would go right into the language models. So we shouldn't use it for those kinds of things. I'll try to speak to this one a little bit as well, because I think it's a very good question that is on the minds of a lot of people. I read your question twice, and then I realized there are two different meanings there. Today we focused on faculty use in teaching: faculty use as a teaching tool, faculty use with students. What we haven't talked about is faculty, or staff, using it for their own work. To my knowledge, that's still very much an ongoing conversation, and as Laurie said, I'm not aware of any specific guidelines around that. I do think that a lot of the resources that have come out contain elements that will inform it, whether that's around privacy, around consent, or around ethical uses. So I think there are elements informing general use, but I'm not aware of specific guidelines per se at this point. Okay, I'm going to continue to move down the chat; I see that that's where most of the questions are happening.
And I see Marina Adshade has asked: are there resources and support available for departments that form committees to develop field-specific guidelines? Great question. And Marina, did you want to add anything to that, or should we just open it up to the group? I'm happy to have it opened up to the group. I'm trying to do this in my own department right now. We have a couple of initiatives, some of which require funding, which the department has provided. I'm just wondering if there's anything more general that we could draw on. I wonder if that would be a good candidate for Students as Partners; it's not a TLEF, but there's a Students as Partners CTLT grant. We're thinking about that as a really good place to bring students into the conversation and support that kind of work. Ainsley and Vered might know of other sources of funding, but I think typically the funding is at the level of the department or a TLEF, and so in addition to Students as Partners, another source might be a small TLEF for the coming year. That's great, Laurie, thank you. I actually have a little initiative running in my department, which I'll write about in the chat because it's fun. It takes a small amount of funding, but it actually seems to be going quite well. I won't take up time here with it, though. Thanks, Marina, that would be great if you could share it in the chat. I would just add that on our FAQ page we do have a list of funding for research in the area, so some of those sources might be relevant to this particular question, Marina, and some might not, but we've listed a couple of different sources there. And if we're talking about more general support, rather than financial support, I think this is also a good question for the faculty forum.
We have the faculty forum on the 13th of June, where we're really trying to get a sense of, and discuss and share experiences around, how different faculties are addressing this at the department level, at the dean's office level, and at the course level. So I hope we can bring this one back to that event as well. Okay, let's see. Did you have anything to add to that, or should we move to the next question? No, we can move to the next question. So I'm going to go down to, oh, this is a good one from Shauna, why not: could you tell us a little bit about this afternoon's session? I saw Christina and Lucas in the room. Sorry to put you on the spot, but could you mention very briefly what this afternoon's session is going to cover? Sure, yeah. So hi, I'm Christina Hendricks, I'm the Academic Director of the Centre for Teaching, Learning and Technology, and Lucas and I and several other folks are doing a session this afternoon where we'll be talking about various AI tools, including ChatGPT but also several others; a bit about some ethical and privacy considerations; a bit about limitations and capabilities; and then also some discussion of dealing with assignments and assessments. So some similar things to what has been discussed here. Thanks. Thanks, Christina, sorry to put you on the spot there, but thanks for chiming in. There are a lot of questions here, and one thing that I do want to mention is that we're tracking these questions, and if we can't answer them now, we'll find some way to answer them, whether we integrate them into our FAQ or follow up in some other way. If we don't get to your question, it has been taken into account, and hopefully it can continue to influence some of the resources that we are developing. So I'm going to scroll down a little bit to a question from Arnold O'Penney.
Is there a difference in the accuracy of responses generated by the AI-powered Bing search tool versus ChatGPT's output? Vered, I think this one is for you. Yeah, I've seen some helpful answers in the chat. So one of the main differences is that Bing has access to the web, so it can answer questions about recent events, whereas ChatGPT was trained on data up to 2021, so it can't answer about anything after 2021. Another major difference is that when Bing provides an answer, almost every piece of information it provides has a footnote with a link to a URL that is supposed to support that information. I don't know of any quantified differences in accuracy. I would assume that Bing is more accurate because of the access to the internet, but at the same time, that doesn't solve the problem of making factual errors, and I find it sometimes even riskier to use Bing because it seems even more confident by providing the URLs. It has been shown that sometimes, even when Bing links actual URLs that discuss the actual topic of the question, the linked articles don't provide the evidence for the answer that Bing generates. So I find it's easy to over-rely on Bing when it still suffers from the same problem of making things up. Thanks, Vered. I'm going to jump to a question from Sharon, because I think it might bring together a couple of elements that we've been discussing, and I see there have been some responses in the chat as well. Sharon Jarvis asks: would adding to the syllabus that, if you're using AI, it must be cited, be acceptable? I see that some have responded in the chat, but I wondered if maybe we could say a few words about possible approaches in the syllabus. I'll mention that in the CTLT resource, which we linked to earlier, there is a section on communicating with students, which provides some examples of different syllabus approaches and ways to do that.
And Laurie, you might have a few words to share about that as well. Sure, thank you, although I think, as Ainsley's noted, there have been excellent responses already in the chat. I think that citing Gen AI may or may not be sufficient, because you may need to understand more explicitly what Gen AI has actually contributed. So, as the responses in the chat suggest, have students outline exactly what they used it for: was it for brainstorming? Did it do a draft? Did it check your revision? For example, as I talked about in my presentation, you may need to have them include a methods section or a footnote or acknowledgments that describes the process through which they used Gen AI, because you may find that more helpful. It's also important to think about, when you're citing Gen AI, what exactly you are citing, because Gen AI has produced its recommendations, its predictive text, by scraping the unacknowledged work of other people, other contributors. So I think that while it's a great start, and it certainly reminds students that you expect them to account for their Gen AI use, we can push it a little further to have students account for, and be responsible for, their engagement with it. Thanks, Laurie. I think we have time for one more question, and as I noted, there are a lot of questions in the chat, some of them very large questions. We have a record of all of these, and I just want the group to know that they will be taken into account. But I see a question from Miti: do you know if UBC is considering providing a general course on using machine learning with integrity for all students, like a driver's licence for AI? I just want to put that out there. Laurie, do you have any thoughts on that? I'll say that there are a number of courses about academic integrity, available on the academic integrity website, but this is an interesting idea.
Yeah, I think it's a great idea, with a caution. So as Ainsley's noted, we have a number of co-curricular modules that we invite students to take, but we can never entirely farm out, to something outside of our own courses, our conversations and our instruction about anything related to what will clearly become a foundational literacy, because we need to be asking students to think about what its use looks like in the course I am taking now, in the discipline in which that course is situated, and in some cases in the professions that I'm entering through this training. So I think it would be terrific, and we'll make a note of the idea of having a kind of intro module through Canvas. I think that's fabulous, but then I would encourage us to also offer those conversations and that instruction with our own students. I did see someone note in the chat the concerns about time and what happens when we add all this content, and that's always the constraint we're working against. I don't have an answer; we need one of those time-turners that they have in fantasy novels. But I think we just can't shy away from the responsibility of having an explicit set of discussions that could extend from a kind of primer. And I think that's a fantastic idea. Thanks, Laurie. I see we've reached 10:30. Thank you all very much for joining us today and being so active in the chat with your questions. It's been extremely enriching not only to talk about some of them, but to learn what questions you have. As I said, this will continue to nourish and inform the work that is still very actively going on. So thank you all very much for joining us today, and thank you to our wonderful panelists, Laurie and Vered, for being a part of this conversation. Thank you.