So I've been quite obsessive about generative AI since it really emerged into public view last year. In that time, I've done about 30 workshops on it, and I've had an opportunity to do some research and really play with it a lot. I'm not an expert in this space, but I hope to share some of what I've learned through playing and through discussions around these tools. Today's workshop is called a 30-30 workshop, and sometimes that title is more aspirational than actual. The idea is that it's 30 minutes of interactive presentation followed by 30 minutes of discussion. I try my best to do that, but today is probably going to be closer to 40/20 or 45/15, because I have a fair number of interactive activities for you. I will be recording the session and sharing the recording back with you. I also want to acknowledge that I'm coming to you from UBC Vancouver, which is located on traditional Musqueam territory, and to mention that when we start thinking about generative AI, it's important to acknowledge that in addition to the copyright problems these tools have created, they have scraped a lot of Indigenous knowledge on the internet, and they're sharing and repurposing it without permission and without attribution. That's yet another circle I have trouble squaring when I think about: should we be using these tools? How should we be using them? How should we be crediting the people who created the original web information that was scraped? So let's begin now, and I'm going to share slides with you. In a moment, Audrey, who's helping me out with this session, is going to add a worksheet into the chat for you to follow along with. What we're looking at today is Bing Copilot in teaching and learning. And here's the worksheet — Audrey, if you could share that in the chat now. What I'd like to encourage you to do, to make the session a little more interactive and recognizing that the best way to learn about these tools is to really play, is to follow along with me in the worksheet. I've included the prompts that I'll be talking about and all of the different activities that we'll be doing throughout the session. The idea is for you to follow along, and if you want to take it further a little later, you can play with things on your own. The slides are all CC BY, and I encourage you to adapt them and use them in your own presentations and your own thinking later. For the agenda, we're going to start by talking briefly about what Bing Copilot is and how it's been set up at UBC. Then we'll talk about different ways of using it: text output, first from a teaching-materials perspective and then for teaching and learning, and I'm going to seed in a little bit of effective prompting along the way. Then we'll talk about image outputs, and then visual search, or computer vision. Throughout, I have some activities where you can try these things out. I will say, before I jump into the demonstrations, that I do most of my demonstrations in ChatGPT-4, and one of the challenges with Bing is that it doesn't always do what I want it to do — it does far fewer of the things I want than ChatGPT-4 does. So I'll put that out there right now.
Some of these demos may go a little sideways on me, and I think that's a good learning lesson when we're thinking about these tools. If you want to put questions in the chat during the session, please go ahead; I won't be able to answer them until I pause. I also encourage you, when I pause at different points, to jump onto your mic and share some ideas and observations. So what is Bing Copilot? Bing Copilot is a program created by Microsoft, run through the Bing search engine. If you want to get to it, you can log into Bing at UBC, log into Bing generally, or use it in an incognito window — any way that you can use Bing, you can access Copilot. At the back end, Copilot uses GPT-4 as its large language model. However, Microsoft has built some middleware into the tool, and what that means is that sometimes it's directly calling on GPT-4, and sometimes the answer seems to be interpreted through Microsoft's own layer — I think it's called Prometheus — which changes the answers a little bit. You'll notice Bing Copilot has three different conversation styles: balanced, precise, and creative. These change a little over time in what they produce. What the Bing documentation says, roughly, is: use creative for creative output, use precise for fewer hallucinations, and balanced sits in the middle. What I've found, and what I've been reading online, is that balanced is often really problematic — the balanced setting gets me the worst answers. By worst, I mean, for example: I tried computer vision with balanced the other day and took a picture of a badminton racket. I asked, what is this? And it said: it's a tennis racket, and here are the rules for how to play tennis. So balanced goes off a little more like a chat buddy, or a modern Clippy. Again, I tend to do most of my prompting in the precise or even creative modes, though I have noticed they keep changing how these modes behave. What we're trying to do with these tools, or at least what I'm trying to do, is get to GPT-4 and avoid the Microsoft middleware within the tool. At UBC, there are a couple of different ways you can use these tools. If you're UBC faculty or staff and you log in through Teams to use Bing Copilot, you have access to the enterprise-level Bing Copilot. That means there are more data guards: your data won't be used for training, at least according to the documentation, and if you're logged in, you can ask 30 different questions in the same chat. Students are a little different. Currently at UBC, students are only able to use consumer-level Bing. That means they don't get the same level of enterprise data privacy, but it doesn't require a login, and this can be quite important for privacy — we'll speak about that in a moment. If students are logged in, they can ask 30 questions and it will remember them. If they're not logged in, they can only ask five questions, or go five levels down in a conversation. When we think about the value of Bing, and why I think this is going to really change our thinking about Gen AI at the university: Bing has had a privacy impact assessment at UBC. What that means is that we're able to recommend it in class, with caution. So, for example, in a classroom assignment, we can recommend that students use this tool.
We don't have to give them an alternative that avoids Bing entirely; the option we can give them is to use it without logging in. The caveat there is that the data they put in is still going into Bing's training data, Bing's servers, et cetera, and we need to be very cautious that they're not entering personal information into the tool. A note there: I think there's generally a lack of critical literacy among students around these tools, and we need to help students understand that what they're putting into these tools, what they're uploading, what they're taking pictures of, is not private. There could be leaks in these tools, and inputs could be used as training data as well. So on the CTLT AI website, we've included language that you may want to put in your syllabus if you're interested in using Bing in your course: Bing Chat consumer can collect basic information about you, such as your IP address, your search prompts, and documents you upload or view in your browser. It can store information for up to 18 months, and you may not be able to delete this information. Do not enter PII into Bing Chat, or information you would not share publicly on the internet. And this is what I'm finding with generative AI, and it's hard to keep in my head — I used it over Christmas as a counselor, which would be terrible if that leaked out — the understanding that you may not want to share things you would be uncomfortable with becoming public on the internet. I grabbed this next page from a blog post that I'll link out to you later. Again, as faculty we now have an opportunity to use this in our classrooms without violating students' privacy, if done well, but it's worth thinking about how we're going to assign it to our students. This is from an article that curated a whole bunch of different syllabus statements and identified what effective syllabus statements have in common, and I really like this framing of how we can communicate. In a recent student panel that we held as part of the Gen AI Symposium, one of the things the students said is: we really need to understand how to use this in our class. If we don't understand how and when we can use it, there's a good chance we'll start using it anyway, perhaps in ways that aren't appropriate. So, number one: state the reasons for AI use or prohibition. This is worth thinking about for our assignments — when can students use it, when shouldn't they, and why not? Number two: what are specific examples of acceptable and unacceptable AI use? Number three — and this brings in critical literacy again — what are the ethical concerns with using these tools? What are the privacy concerns? If we're thinking about students generating AI art, and we know that artists right now are really pushing back — they're even poisoning their online art so that AI can't scrape it — what are the copyright implications, and how can students reconcile that? Students also need to understand the accuracy, and the lack of accuracy, of these tools, the need to keep a human in the middle when using the outputs, as well as the environmental concerns. Number four: specify documentation. If students are using it, what do they need to tell you? Do they need to include their prompts? The tool they used, and which version? And how might they cite it? Number five: what are the consequences of misuse? And if there are questions, invite students to ask you for clarification.
So that's thinking about Bing as a classroom assignment. Now what I want to do is move through the tool itself, do some demonstrations with you, and think about it in three buckets — the three capabilities Bing Copilot has. One of them is text output, one is image creation, and the third is computer vision, or visual search as Microsoft calls it. Let's start with text output, which is probably the area you're most familiar with in these tools. The first way we might start using this in class — and I'm going to move back and forth between demos and slides now — is that we can access live URLs with Bing. This is a little different from other tools: with ChatGPT 3.5 you're not able to access live URLs, but you can do that with Bing Chat. So I'm going to go to Bing now, go to Copilot, and grab a URL online — I like to use CBC. I will say that when you're getting the tool to access URLs, a number of sites have now blocked AI scraping as part of protecting their IP. A lot of journals will do this; for example, I can access JAMA articles, but in many cases I can't access other journals. So keep in mind it may fail for you. The other thing to mention is that sometimes Bing seems to read the URL the wrong way — I'm not quite sure why — and annotates a different article than the one I gave it. What I'm going to ask it to do is: annotate the following article and create a citation in APA. I'll put in the URL and click enter. From my experimenting, it can view websites, and it got the article right in this case. It's linking to the article, giving me a quick summary, and now it's giving me a citation in APA format. Of course I would need to check this, but at least in my own practice, this has opened up the idea of quickly getting annotations for articles and reading summaries of them. And we could transform this article a little now, so I'm going to ask it to make a table of key terms from the article and translate them into Spanish, and let's see if it can do that. I find Bing is quite good at making tables, and I'm going to show you how those export in a minute. Now it's giving me English terms from the article and translating them into Spanish — I've also had it work with Punjabi and a number of other languages. So you can think about how students in your class might find different online pieces and transform them, and how you as a faculty or staff member might quickly create artifacts and transform text. Once you create the table in Bing, what's nice is that if you click on export, you can export it into Word and it will appear right in Teams for you, or you can go up to the top and edit it directly in Excel. So the tables can be quite interesting within the tool.
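If you're curious what this would look like outside of Bing: plain GPT-4 through the API can't browse the web for you, but you can approximate the same annotate-and-cite workflow by fetching the page text yourself first. This is just a rough sketch of that idea, assuming the OpenAI Python SDK plus the requests and beautifulsoup4 packages; the URL, model name, and character cutoff are placeholder examples, not a recommendation.

```python
# A rough sketch of the "annotate this URL" workflow outside Bing.
# Plain GPT models can't browse, so we fetch the page text ourselves.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

url = "https://www.cbc.ca/news/example-article"  # hypothetical URL
html = requests.get(url, timeout=30).text
# Strip the HTML down to readable text before sending it to the model.
text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Annotate the following article and create a citation "
                    "in APA format:\n\n" + text[:12000]),  # stay under the context limit
    }],
)
print(response.choices[0].message.content)
```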
So that's one way of using Bing. Before the next demo, I want to seed in some prompting tips for creating text output. One thing I find fascinating with Bing is that it seems to require more thoughtful prompting so it doesn't go sideways on you — which is both good and bad. It's a good way to hone your prompting, compared to ChatGPT-4, where the output tends to stay reasonably on target regardless of the prompting. So, a few prompting tips, and these are from OpenAI. If you're relatively new to prompting: prompting can make all the difference. Recently, a piece of GenAI art won a state fair competition in the United States, and that piece of art took more than a week of prompting the model to get. So sometimes when I talk to folks and they say, well, this output's not very good — often it's not an LLM issue, it's a prompting issue. Prompting takes time and thoughtfulness. Number one: make sure that when you're prompting, you're writing very clear instructions. We're not talking to a human; we're still talking to a computer, although sometimes it doesn't feel like that. Number two: provide reference text. If you want to create some learning objectives, put learning objectives in the style you want into the tool and get it to write similar ones. If you have a writing style you want, put an excerpt of your writing into the tool and have it work from that. Number three — and I'm sure most of you have experienced this — rather than trying to create one grand prompt (although if you're working at the API level, you do need to write very large prompts), consider splitting the task up into smaller prompts and refining as you go. Number four, and this one's fascinating: give the model time to think. By that, OpenAI means get it to show its work — ask it, when it works through something, to tell you how it came up with what it's creating, or ask it to break its work down into steps. Sometimes I'll ask it to do things like: explain why you did that, explain the perspectives you pulled from, break this into logical steps. What some research has found is that doing this not only tells you the background of why it's doing what it does, it also improves the quality of the output. Number five: evaluate and test — it's a constant process of evaluating and testing. So what are some ways we might use textual output in Bing Chat? One of them is teaching materials; these tools are very powerful for creating teaching materials. Specifically, we can think of creating test questions — recently I've been working with instructors who needed makeup exams, and they were able to quickly generate makeup questions they could go through, check for accuracy and validity, and assemble into makeup tests. Second is rubrics for evaluation. Third is case studies or scenarios; these tools are very powerful for scaling up to multiple case studies and scenarios. And fourth is lesson plans. I'm going to demo a couple for you now, and I want you to look at the prompts I'm using, the complexity of those prompts, as well as the output. "You are an expert teacher in cognitive psychology" — what I did there is apply a persona prompt. Another piece of research on these tools is that using personas tends to draw on higher-quality data and improves the output. So: "You are an expert teacher in cognitive psychology, skilled in creating intriguing and thought-provoking questions for upper-level undergraduate students." Notice all the specificity in this. I'm not going to read through the whole thing, but I'm laying out a scenario for the model: "Could you please help create comparable questions for the second quiz?" Oh, sorry, let me change that — I'm just going to remove a stray "as a model" phrase. Here is one of the original questions: "Describe each component of the Atkinson-Shiffrin model and explain how information flows from one part to the other." So I've given it an example of the type of question that I want it to create.
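Before I run this in Bing, a quick aside for those who do work at the API level: here's roughly what this same persona-plus-reference-question pattern looks like as an API call. This is a minimal sketch only, assuming the OpenAI Python SDK; the model name and the exact wording are my examples, and this is not what Bing does behind the scenes.

```python
# A minimal sketch of the persona + reference-text pattern via the
# OpenAI Python SDK. Assumes OPENAI_API_KEY is set in the environment;
# the model name is an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The persona goes in the system message (persona prompting).
        {"role": "system",
         "content": ("You are an expert teacher in cognitive psychology, "
                     "skilled in creating intriguing and thought-provoking "
                     "questions for upper-level undergraduate students.")},
        # Reference text plus a request to show reasoning
        # (prompting tip: give the model time to think).
        {"role": "user",
         "content": ("Here is one of the original quiz questions: 'Describe "
                     "each component of the Atkinson-Shiffrin model and "
                     "explain how information flows from one part to the "
                     "other.' Please create five comparable questions for a "
                     "second quiz, and briefly explain the reasoning behind "
                     "each one.")},
    ],
)
print(response.choices[0].message.content)
```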
So I'm going to head over to Bing Copilot now, create a new topic, and paste this in. And let's see the result. Great — Bing is going to create some questions. It created a single comparative question, but if you take a look, it's pretty similar to the original. So I'm just going to say: create five more that are more general, and see if it gets it. And spelling doesn't seem to count — perfect. Now it's giving me some more questions, and I could build on this a little if I wanted to. So: create an answer key. This is where the shorter, refining prompts come in, so I can have answer keys for these questions as well. You'll notice at the bottom here it says 3 out of 30 responses. What that means is it has a working memory, and I can make 30 different prompts now, where it will remember them to an extent — again, that's because I'm logged into the tool. Once I've created those, I can copy them, or export directly into Word, text, or PDF. Let me just try Word; it should export and open right in Teams. You'll see it's opened — in OneDrive, I think — and I can edit it within OneDrive. So let's head back and create a new topic; I'm just going to do a couple more demos here. I want to do one more demo for teaching materials, and that's a rubric — Bing, again, is quite good at creating rubrics. "Act as a communicating science instructor." Again, I give it the persona, and then I tell it how it should create the rubric. You can output rubrics in different formats, so if you have a specific tool that you can upload rubrics into, you may want to create the rubric in that format so you can add it to the learning technology. Let's see the rubric that comes out. Wonderful. So our rubric is coming out, and again, I can open it directly in Excel, tweak it, make changes to it. A reminder that you really need a human in the middle, because you can't be sure of the accuracy of what these tools produce. I won't finish that off; I'm going to keep moving. So we've talked about Bing text output for teaching materials, and I want to think about it in terms of teaching and learning now. What could it mean to assign our students something in terms of text output? One way faculty have been doing this is by giving students the opportunity to analyze the output of these tools and compare it to their own work. This can be quite helpful because the outputs of large language models tend to have hallucinations — meaning inaccuracies — that students can evaluate them for. They also tend to have built-in perspectives and biases that students can analyze as well. So I wanted to show you an example from a colleague in Education whom I actually just met with today, Dr. Kari Grain. She showed me how she created an evaluation prompt for her class, offered as an option for a discussion board in an online course, about using ChatGPT to analyze contemporary changes in adult education. She gives them the prompt: enter the following. "In a hundred words or less, and based on research and literature in adult education, tell me some specific ways I can learn best as an adult."
"Keep in mind I am someone who loves to learn by memorization," et cetera — and they fill in the blank in that prompt. Once they've done that, she has them copy and paste the output into their discussion post and then evaluate it: do you think the information provided is useful or accurate? What might go wrong in this type of prompt activity if you needed research-based information? And she said the results were fascinating. The students were really engaged with it; some found the results were really lacking and not connected to the research, while others found them quite connected. So that's evaluating. The second way is using Gen AI to generate, brainstorm, and build on ideas. The idea here is that it's almost augmenting what students are able to create — for example, having them brainstorm multiple business plans before creating their own, or, if they're writing an essay, using it as a partner to generate ideas. And third is the idea of using Bing Copilot for independent study and learning: have students enter prompts that get the tool to tutor them and ask them questions about a particular subject, or give them a prompt that lets them quiz themselves in a certain area, or play a game in a certain area. So there are lots of different ways you can use this in teaching and learning. What I'm going to do now is hand it over to you, and I'd like to give you — we don't have too much time, so let's say four minutes. Four minutes is a great number for me; for my seven-year-old, four minutes means anything from 30 minutes to a minute. So I'm going to say four minutes. What I'd like you to do now is open Bing Copilot and create a rubric or a quiz question, and at the same time, think about how you might integrate this tool into your teaching and learning practice. I've put this on the worksheet: use Bing Copilot to create a rubric or a quiz question, and then share how you could integrate Bing Copilot into an assignment, okay? I'm going to start my four-minute timer here — give it a try. Again, open Bing Copilot, and what I'm interested in when we debrief is the quality of your output. How was it? What was missing? What was there? Go ahead. I'm going to jump in now and talk a little bit about image prompting, which is another ability Bing has. From my understanding, it calls on DALL·E 3, which is an image creator. Again, I don't find it as powerful here as in ChatGPT-4, but it does allow us to create images. You do need to be logged in to create images within Bing, so if you have students who aren't logging in, they won't be able to create images. I put this on the worksheet, though I'm forgetting the resource I got it from: when you're generating an image, a good way to prompt is to describe the content of your image ("a 3D rendering of..."), describe the subject (an owl), describe the relevant details (a red owl with bright blue eyes), describe the form and style — and here we can get into lots of different styles; in this case, abstract expressionism — and define the composition ("with a resolution of X"). So, lots of specificity in there, and we can start prompting for images. Why would we want to, and how are we thinking about using this? One way that I'm using image generators is for creating teaching materials, with a caveat.
On this slide, on the left-hand side, you see my background: I've generated all my PowerPoint backgrounds with image generators, and I find they're great at creating decorative images. The image on the right is a classroom I asked it to create — I was thinking, if I were doing teacher education, maybe I would have it create a classroom. Well, the image creators so far make a lot of small, detailed errors. I'm really not sure what's up with all the plants in this classroom — maybe that's okay — and the paintings are way up the wall there. So I wouldn't use it for specific diagrams at this point, but for decorative images it can be effective. Next is thinking about image creation in teaching and learning. Dr. Patrick Pennyfather at UBC, in theatre and film, uses Gen AI to get his students to think about the biases built into these tools. Here's an example I made before this session: I asked it to create an average Canadian family, and this — with Bing Chat — is what it came up with. So in the classroom, I could imagine having my students generate something like this and then analyze the image for the bias within it. And this is a very simple example; you could get quite a bit more complex than that. So let me give one of these a try — the average Canadian family — and see if it does anything different here. One moment. I'm going to go into Copilot, and you can prompt it right within the chat, or there's another tab that only does images. "Create an image of an average Canadian family," and let's see what we get. Again, as long as your students are signed in, they'll be able to do image creation from within the tool as well. I have found that sometimes image creation just completely fails on me — I'm hoping this isn't one of those times. Let's see what we get. The other day it actually asked me for credits to do this process faster, which I found a little disconcerting. Let's come back to that — it's kind of like one of those baking shows where things go on behind the scenes. So while we're waiting, what I'd like you to do is open Bing Copilot again and try to create an image that you could use in a PowerPoint — a decorative image, for example an icon or a background — and also try to create an image where you're going to surface bias, something that could be a place of learning with your students. I'll give you four minutes again, and I'm still waiting for that image to come up; hopefully your images come out a little faster than mine. Go ahead — four minutes — and then we'll share back on the process. Ah, okay. Well, you know, it's not generating for me, so I'm going to jump over to ChatGPT — ChatGPT tends to be less biased — and do the same thing right now. Let me share my screen with you on this. "Create an image of a Canadian professor," and let's see. I've found that ChatGPT also uses DALL·E 3, but I haven't found it nearly as biased as Bing or Midjourney — Midjourney can be quite biased. I'm just going to go back: my images are still generating here, I don't know if they're ever going to come through, and it looks like other folks are having the same issue. We'll see — ChatGPT-4 has had this happen far less. And just to show you the interface I'm using: this is ChatGPT-4; I pay $20 a month for it. So, yeah — there you go. There's a Canadian professor for you.
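As an aside for the worksheet: the same DALL·E 3 model behind Bing's image creation is also available at the API level. Here's a minimal sketch, assuming the OpenAI Python SDK; the prompt just assembles the content/subject/details/style/composition recipe from a few slides back, and the size value is an example.

```python
# A minimal sketch of image generation with DALL-E 3 via the OpenAI
# Python SDK. The prompt follows the recipe above: content, subject,
# details, style, composition. Model name and size are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=("A 3D rendering of an owl; a red owl with bright blue eyes, "
            "in the style of abstract expressionism, centered composition"),
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)
print(result.data[0].url)  # temporary URL for the generated image
```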
Coming back to that Canadian professor: a similar level of bias shows up when we're using that tool. And did anyone get their image to work there? I see some head shakes, so we'll keep moving on. Fabio asks: is there a specific place you can prompt Bing for images? There are a couple of different places. Generally I just do it from the chat, as I've shown you. There is also an Image Creator — let me just look here; maybe it's under my Bing here. Hmm, not finding it right now. I might need to be logged into Edge to get at that, but there is a dedicated Image Creator where you can do it as well. Let me double check... no. The next question in the chat, from Josh: how do we navigate publication copyright if we want to direct it towards research? You mentioned earlier that a journal like JAMA might grant access, but other journals have limited access — would this then require uploading, for example, a PDF of the journal article? I think uploading a PDF of a journal article is really problematic copyright-wise, and you would need to access the journal article on your own and read it. Even what we're doing — getting it to scan the journal article and write an annotation — I imagine is okay with copyright, but it is a very challenging area to think about. And as you bring that up: I think with copyright, image generation is quite challenging too. I recently did a workshop with a number of teacher candidates, and the art teachers were very concerned about generating images, given that online images have been scraped and these tools replicate artists' styles. So I think it's worth bringing that up with students if you are using an image generator. So I'm going to move on to the last area now, and that is visual search, or computer vision, with Bing AI. You can also do this with ChatGPT-4. If you're interested, I recently did a course through Vanderbilt University on computer vision — it is so interesting. Computer vision is the ability to take or upload a picture and have the tool analyze it. You can use your webcam, upload an image, or use your phone with the Bing app and take a picture of something. So how might you use this? One way is to create alternative text for images. I uploaded this image of a bacterium into Bing and asked it to create alternative text for it, and this is the alt text I got back from Bing, so that I could use the image in a more accessible presentation. What's interesting is that I ran the same request through ChatGPT-4 directly, and it was more specific: the original image was actually of a prokaryotic cell — sorry, my pronunciation — and while Bing said it was a bacterium, ChatGPT-4 was able to identify it as a prokaryotic cell. So in your own teaching, a couple of ways you might consider using visual search: one is to create alt text, and I'll demonstrate that in a second. The second is in your room — say you use flip charts or a whiteboard — take a picture, and you can have the tool organize the data captured in the picture. For your students, I think it's a little harder; one way they've suggested at Michigan is for students to use it to help them analyze charts. Before I go further, let me demonstrate it so we're clear on what this tool is. So I'm in Bing Chat now.
I'm going to click on this "add an image" link here, and you'll see I can take a photo or upload directly from a device — let me grab an image. What I'm going to say is: describe this image in detail, provide me with meaningful alternative text, and also tell me where it is located, and we'll see how well it does. This is a picture of paddle boarding on the ocean. Bing does blur faces for privacy, so if there is a face, it won't be able to describe it. Let's see what we get — and hopefully it doesn't stall so that I have to move over to ChatGPT-4 again. While we're waiting, I'll do the cooking-show thing again: I'm going to upload the same image to ChatGPT-4 with a similar prompt — describe this image and create alternative text, and create a CSV showing me every object visible in the image — and give that a try. We're going to go back to Bing now, and here we go. So: "The image you've shared has captured a serene scene of a lake or a body of water at what appears to be sunrise or sunset. The viewer seems to be on a boat." It's not a boat, it's a paddle board. The edge of the board is visible, the water is calm, the mountain is silhouetted against the sky. As for location, it says it could be a beautiful lake somewhere. In terms of alternative text, here's its suggestion: "Sunrise over a calm lake, viewed from a boat, with a forest silhouetted against a glowing sky." So when we're creating alternative text, we can think about using this tool — though, as I said, I find ChatGPT-4 a lot better: "This image shows a serene body of water with the sun setting behind a hill." I also asked ChatGPT-4 to create a CSV of all the objects within the picture. This is the kind of thing I'm learning in that computer vision course: you can use it for doing things like inventories, or take a picture of a whiteboard that's full of text. Let me just show you that. This is what ChatGPT-4 created for me — the trees, where they're located — and it created that automatically. So I think computer vision is very interesting, and it's very powerful to think about what computer vision could mean in your teaching. How could it be used? A couple of examples I thought of: taking a picture of an online course and asking, how could the design of this online course be better? Or taking a picture of a classroom and asking, how could I set this up for more effective interaction? Or, again, taking a whiteboard and saying: take all the content on the whiteboard, create themes for it, and divide the themes into a CSV file that I can use later — or having your students do that so they can learn more or understand your lesson better. So that brings me to the end of the slides.
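One last pointer for the worksheet: if you want to play with computer vision at the API level rather than through Bing, here's a rough sketch of the same alt-text-plus-CSV request. It assumes the OpenAI Python SDK and a vision-capable model — the model name and the file name here are my examples, not something Bing exposes.

```python
# A rough sketch of an alt-text + object-CSV request against a
# vision-capable OpenAI model. "gpt-4o" and "paddleboard.jpg" are
# placeholder examples; assumes OPENAI_API_KEY is set.
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local image as a base64 data URL for the request.
with open("paddleboard.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("Describe this image in detail, provide meaningful "
                      "alternative text, and list every visible object "
                      "as CSV.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```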