As Kevin mentioned earlier, if you see a question in the chat that you like, give it a thumbs up or a heart or some kind of emoji so we know to bump that question to the top. Otherwise, please use the Q&A section to submit your questions. This is being recorded, so you'll get the slides and the video replay within 48 hours. And now I'd like to turn it over to our speakers from Roundtable. Josh and Kim, take it away. Thank you so much, Rita. All right. Hello, everybody, and welcome to our artificial intelligence Ask the Experts session. I always cringe a little at the word expert. I don't really consider myself an expert in anything, but I'm always learning, and learning with everybody is just part of my job. That said, I have been doing a ton of work in AI, using AI, teaching AI, learning AI over the last couple of years, and I'm very happy to share what I can today and also learn from everybody here, including my wonderful colleague, Kim Snyder. So, Kim. Hi. I'm Kim Snyder, VP of Data Strategy at Roundtable Technology. The other hat that I wear, which is relevant because this is the ethics, principles, and governance session today, is data privacy and data governance. I've been doing a bunch of that with nonprofits; in fact, Josh and I have been working together with nonprofits for about 30 years. And we've been very, very focused on AI and some of these implications. As I was telling the group before the session, I love the questions that have been coming in, so I'm looking forward to this. Yeah. Actually, AI has been more or less in the news for about 30 years. It was about 27 years ago that the first big AI story broke. I wonder if anyone knows what that is and wants to put it in the chat. Here's your hint: it was the first time AI became meaningfully better than humans at something that humans cared about.
Again, about 27 years ago. All right, just let that sit; we'll see if anyone in the chat grabs it. So, Kim, I'll let you take it from here. I'll take this off. This is our welcome and introduction. We're going to talk briefly; we've got a couple of slides here that introduce themes I think we'll be coming back to over and over in this discussion. We're actually going to reverse things a little and start with your questions, so start mulling them over, and then we'll save time for submitted questions at the end. So let's go forth. And everybody is correct: folks have said Deep Blue beating Kasparov at chess, and that is correct. Twenty-seven years later, Microsoft has actually just overtaken Apple as the most valuable company in the world. So another trivia question people can put in the chat: which country's gross domestic product is closest to Microsoft's market value right now? All right, heading into our session. We need to mute Roy; we're getting feedback from Roy's microphone. There we go. Go ahead, Kim. Thank you. Alrighty. So we are going to hop right into ethics, principles, and governance, and this is an important session to have. On the next slide, I like to start these by level-setting a little bit. First of all, AI means a lot of things. We can be talking about sophisticated systems that people are building or using, we can be talking about software that has AI included in it, and we can be talking about the thing that's gotten AI into the headlines a lot lately, which is generative AI. So I want to acknowledge that there is a continuum of AI, and I also want to say that the journey of adoption for a nonprofit is also a continuum. I just wanted to put that out there.
And I'm not sure where everyone here is on that journey. But suffice it to say, even at the very beginning stages of initial exploration and experimentation, which is where I see a lot of organizations now, I believe the principles we're talking about today are key to what you need to be thinking about, even at the very beginning. They don't apply only if you're deep into some kind of machine learning. Okay, I think we can go forward, and I believe everyone will get these slides. And I just want to make sure everyone can hear me okay; I saw something in the chat about the microphone. All right. So when you think about ethics principles, these are baseline principles around fairness, transparency, maintaining privacy, and general accountability as an organization. These are the kinds of principles you'll see in something like the AI Bill of Rights, which I believe came out in 2022. So when we think about AI ethics, these are some of the big worries. When we apply this to generative AI, and I suspect that's where most of the people attending are, or a little earlier on the adoption continuum, trying out some of these tools or wondering whether your organization should try them, the same kinds of principles apply. People ask a lot of questions about authenticity, about copyright, and about the potential to generate misinformation at scale and manipulate others with things like deepfake videos. Copyright and intellectual property keep coming up in the news, and it's a very unsettled area. People have asked questions about all of these things.
But one of the things to think about is: how will this affect our workplace? How are we going to deal with this new thing? What are we going to do? So, moving on, I'm going to talk at a high level about some of the general risk categories. A lot of people have heard the headlines, and headlines like drama; to be honest, even some people in the AI community like drama. So think about the risks, and here I'm focusing more on generative AI, for a nonprofit that's starting to put this to use. The biggest risk, the thing that will happen most often and that you need to be mindful of, though luckily there's a way to keep it from getting in your way, is what's known as confabulation. That's the system saying inaccurate things; it makes things up. An important thing to remember about AI, and we don't have time to get into the whole thing now, but I'll give it to you in a couple of sentences: even though intelligence is in the name, there isn't intelligence in there. There's no thing in there that reasons. This is a probabilistic tool; it's statistics guiding this. So sometimes it generates stuff that's incorrect but sounds really correct. Another issue, and I know this is on the minds of a lot of people, is bias. A lot of the generative systems we use today are built on today's data, and today's data has a lot of bias. The people who made the tools are themselves from a certain demographic group. Dr. Joy Buolamwini, who wrote Unmasking AI, uses the term "pale male data," because a lot of the people creating these tools, especially in the early days, have been guys. So yes, there is bias. There are also issues around privacy. Here we're thinking more from the generative AI side
about what you're putting into your prompts. How much data are you putting in there? You don't want to use it with personal information unless you know that your data is not being used for training and is staying within your enterprise. Some of the newer tools organizations are getting, such as Microsoft Copilot, have that capability of keeping things within the walls of your organization. And finally, copyright. For this, I would say a general rule is not to ask it to write something in the style of some named author, and likewise don't ask for an image in the style of a specific artist, or even a New Yorker cartoon, for example, because you can start to tread on the line of copyright there. So I think that's it for me; I'll turn it over now to you, Josh. And the team is putting some great resources in the chat around Microsoft, and this is sponsored by Microsoft, including material around copyright, because they have specific policies on that. I can't hear you. I had turned off my microphone because I was making noise. The answer to the trivia question, in case anyone hasn't figured it out yet, is that Microsoft's current market capitalization, let me see if I can do a switch, is about three trillion dollars as of today. And it's France: Microsoft has just overtaken the gross domestic product of France with its market capitalization. So there you go; Microsoft is a France in terms of economic output. I'll see if I can come up with some more trivia questions. So, on guiding principles for using AI within the nonprofit space, first of all I want to say a couple of things that are on the slide. I think there's a really significant risk, and I would even argue an ethical obligation, for nonprofits to think about how they will use AI and how they will teach their staff to use AI.
One of the reasons Microsoft has overtaken Apple is that so many people believe it has a significant head start on AI investment, and it is believed by many that AI will play a very significant role in the workplace and economy. So if we are not teaching our staff and personnel how to use these tools, and use them safely and responsibly and ethically, and also leveraging these tools to fulfill and further our missions and provide better, different services to the communities we serve, then I think you're missing a tremendous opportunity. And of course, with any opportunity comes risk, and that's largely what we're focusing on today. But I really want to underscore that I personally think there's an ethical obligation to use these tools, both to benefit from them and to be part of this conversation around ethical and safe use. So let me get off that soapbox. Guiding principles. First of all, understand that your staff are probably using AI tools already. At most organizations, even if you haven't introduced any AI policy or any AI training, it's not as if people don't have access to these tools for free or for very low cost. Many of your staff will figure out: hey, I can use this to write emails, write documents, create reports, maybe even analyze data, create images. And if you're not providing them with some policy that says this is the right way to use it, this is not a good way to use it, this is the tool we prefer you use, then you're courting tremendous amounts of risk, as well as potential harms to your staff, to the communities you serve, and to others. So make sure that you're thinking about how AI can align with your organization and where it fits in.
Understand that people are concerned this may impact their jobs, that they may lose their jobs; address those concerns, think about how that will play out at your organization, and provide guidelines on how you want people to use it, how you don't want them to use it, and where things can go wrong. Kim, is there anything you want to add to that? Just that we do have a template we're happy to share with organizations. You've got to start somewhere. You don't need to have the end-all AI acceptable use policy; in fact, you will be continuously revisiting it, because things keep changing. So let's look at use cases. Here's a very simple risk-benefit matrix. On the vertical axis we have benefit: how useful is this? On the horizontal axis we have risk. So let's look at something like: I need to create images for blog posts on a regular basis, and I need these images to be original and catchy. If I can create these images using a generative AI tool like DALL·E or Midjourney, and with a minimal amount of training make sure we're creating images that are appropriate and not copyright violations (I'm not asking for images that look like Banksy paintings), then this is a relatively low-risk, potentially high-benefit use of AI. Down in the lower right, where we add a lot of risk, is, let's say, a chatbot that we put on our website for at-risk youth, because that's who we serve, to give them access to answers on health questions and safety questions and things like that. Our volunteer line is overloaded with calls; we can't answer them all, so let's put a chatbot up on the site. That has a tremendous amount of risk in terms of the kinds of outcomes that could happen. We could point you to lots of different stories around this.
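The two-by-two Josh describes can even be jotted down as a tiny script when you're triaging a backlog of AI ideas. This is a minimal sketch of that triage, not anything prescribed in the session: the 1-to-5 scores, the threshold, and the example use cases are our own invented illustrations.

```python
def quadrant(benefit, risk, threshold=3):
    """Place a use case on the 2x2 risk/benefit matrix (scores 1-5)."""
    high_benefit = benefit >= threshold
    high_risk = risk >= threshold
    if high_benefit and not high_risk:
        return "start here: high benefit, low risk"
    if high_benefit and high_risk:
        return "proceed carefully: high benefit, high risk"
    if not high_benefit and not high_risk:
        return "low priority: low benefit, low risk"
    return "avoid: low benefit, high risk"

# Hypothetical (benefit, risk) scores for the two examples from the talk.
use_cases = {
    "original blog images via DALL-E or Midjourney": (4, 2),
    "chatbot giving health/safety advice to at-risk youth": (4, 5),
}
for name, (b, r) in use_cases.items():
    print(f"{name} -> {quadrant(b, r)}")
```

Even this informal version makes the point: the blog-image idea lands in the quadrant to start with, while the at-risk-youth chatbot lands in the quadrant that demands much more caution.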
So focus first on the areas where you have high-benefit, low-risk things you can leverage AI to do. When you start exposing large amounts of your organization's sensitive information to the AI in order to get the results you want, you're increasing risk; potentially increasing benefit, but increasing risk. That's for when you're further down the maturity continuum. Okay, I think that's all we had. It was intended to be a very short presentation, right around 15 minutes, and we ran just a minute over, which really isn't bad. There's a guide to AI policy usage, and we have the acceptable use policy link in there. At this point, I think we're ready for questions, and before we go to our submitted questions, we wanted to go to live questions. Yep. There aren't any live questions yet, so we can go to the submitted questions, and I'll catch you up if any live questions come in. Alrighty. People can raise their hand, too, if they just want to ask one live; we certainly welcome that. Yeah, I have the pre-submitted questions, so I'll start with one of those, and at any point somebody can take themselves off mute and jump in. The first question I have here is: how can we know if a large language model was trained ethically? Also, how can we find out what the environmental impacts of a particular generative tool are? Kim, do you want to take that first? You want me to take a crack at it? Okay, I will take a crack at it first. How do we know if it was created ethically? One thing to separate here is that, for the most part, the large language models are going to be created by another company, like Microsoft or OpenAI. So I think it's important to look at their policies, and I believe we've got some from Microsoft around ethical AI development, and they partner with OpenAI.
So I'd say that looking at the materials provided by the companies you're seeking to work with is going to be the key thing. Environmental usage, that's a hard call. Yes, it does consume a lot of resources. I think there's going to be more attention to that, and again, I think this comes down to the specific type of application, or actually reaching out to the vendors themselves and looking at their policies. This is an area, though, that people are concerned about. Josh, would you add something to that? We can't hear you. Yeah, I keep turning my mic off because I'm making noise. One of the first things I do with some of these questions, especially when I'm not particularly confident myself, is go to a large language model and ask the question, and that gives me a starting place to think about the issues. Fundamentally, underneath that question about "trained ethically," first we have to define what ethically means to you. Does it mean it wasn't trained on copyrighted data without the permission of the copyright holders? That it was trained thoughtfully, in the sense of looking at the data sets and ensuring they were representative of wide groups of people? That they didn't steal or scrape data they weren't given consent to use? What does ethically mean to you? That part is probably the easier part: determining what the ethical bar is for you, what an ethically trained system would constitute. The hard part is then mapping that onto the world and finding out with any confidence whether actual systems reach the ethical bar you've set. You can try, and I think there will increasingly be more transparency, because consumers and companies will demand it.
But right now, I think you have a very uphill climb trying to establish exactly how these models were trained, because it's very closely held and generally opaque, and they're only getting more so because of all the legal action being taken. So that's going to be a challenging one. Yeah, I'd say stay tuned; there are going to be a lot of copyright lawyers doing a lot of work over the next decade, because this is a very new and unsettled area. And I feel like there was a second part to that question, Kevin. What was the second question? The environmental impact. The environmental impact. For most of these systems it's pretty significant, because they're using up tremendous amounts of energy. To the best of my knowledge, that's mostly what it is; there aren't other significant environmental impacts beyond the fact that they are massive energy consumers right now. That will probably get better as we continue to improve computing power and increase the ability of these tools to perform at a very high level with less data and compute. But right now they are massive energy hogs. Thanks. There's a question that came in through the chat, and I wanted to swing back to it. Robin asked: what AI tools are you recommending? I'll start off with that. First, we want to acknowledge the sponsors. Microsoft was the group that put out Copilot; at the time it was Bing. Josh and I taught a course all about the different uses of Bing, and Bing combines GPT-4, the highest-level large language model from OpenAI, because of the partnership, along with DALL·E 3. So you can make images, generate text, write documents, and look things up, and it's connected to the web; that was one of the real benefits. There are other tools; I would also recommend looking at ChatGPT Plus, because you get some extra features there.
But if you've not tried any tool, definitely try some of the free tools. Any other advice on that, Josh? Well, one thing I definitely want to share about Microsoft on this, and let me get the right image up, there we go: if your organization is using Microsoft 365 and you sign in to Copilot, "your everyday AI companion," you'll see this big "Protected" badge like the one on my screen. If you follow that through, what it is in essence telling you is that this is the free version of Copilot, but it's the free version included with a paid Microsoft 365 subscription, such as E3 or E5 or Business Standard or Business Premium. If you have one of those, then your data is protected, and you can not only use Copilot for free, you can share information with it. You don't have to worry, because if something was already in OneDrive or SharePoint, you're not exposing it to anything further by putting it in the chat, as long as it's protected. You do have to make sure you see that big green "Protected" indicator. That's a big plus on the Microsoft side that the other tools just are not providing. But I also encourage people to try tools. Try them out. I use different tools for different types of needs, for different use cases, and some of that is about finding your own style. Actually, I don't want to jump into all of that, because we have another Ask the Experts on exactly that at the end of the month. The nice thing about the tools is that, while they're all different, they all work in similar ways. Deb is saying she likes Perplexity; Deb does a lot of AI work, and if that's the Deb I'm thinking of, she knows a lot about these tools and uses a lot of different ones, so that recommendation is definitely meaningful.
They're all going to respond similarly to good prompts and similarly poorly to poor prompts. So by building skills in one, in terms of the kinds of things these tools are good at right now and how to get the best outputs from them, those skills will typically carry across from platform to platform. A follow-up on that, and this is a really good question, so I'm going to bump it up a little: how do I know that the answer or text generated by the AI is factually correct? You don't. You have to verify. That's an easy one to answer. Yeah, that is one of the big takeaways I hope people take from here: not so much to be afraid to use these tools because they may say something incorrect, like that rather unfortunate lawyer last summer, but that there is an onus on us, as responsible users of these tools, to verify the things that come out of them. So I tend not to ask it things that I don't already know or don't have any way of verifying before sharing them widely. It also speaks to the importance of reviewing the stuff that comes out, which can be part of your AI policy. That's a great point. It's the beginning of the process, not the beginning and end of it, and that's what it should feel like when you're using an AI tool. I want to jump back into chat, because another good question came in from Aaron: if an AI system causes harm to someone, what are your thoughts on mechanisms for redress? Are there certain policies you recommend to help these individuals? So in the EU, which tends to be out in front on these kinds of issues around data privacy and transparency, a right to human review has been established around AI systems; I forget exactly what they call it.
So if I'm applying for a mortgage, a loan, or a job, and the decision as to whether I'm granted an interview or given the mortgage or loan was algorithmically determined through an AI system, then I have a right to ask for a human review of that process. I think that's a pretty good standard an organization could establish if it wanted a way to redress a particular harm done through an AI system: basically, give anyone who thinks they were unfairly treated by a system you're using the option of a human review. So that's my quick answer. It would be interesting if the person who asked could say more about the kinds of harm they're thinking of. I mean, some of the harms you want to avoid outright, like the harm from the chatbot Josh gave as an example in the high-risk category. Say you're an organization wanting to make use of these kinds of AI tools because they can reach a larger audience, 24/7, et cetera. If those tools haven't been adequately and rigorously tested on an ongoing basis, there's a distinct possibility of them spewing out information that's not correct. So you want to do your due diligence and your testing, and again, think of that risk-benefit question: is the risk worth the benefit? But in terms of AI harms, and just to note what Josh was saying about the EU, we are starting to see, albeit in dribs and drabs, in the privacy legislation coming to the fore in various US states, the inclusion of that type of right to human review around AI and decision-making.
That's one of the reasons why Josh and I have felt so strongly about, and have continued, working with nonprofits: to help nonprofit organizations use these tools and feel comfortable using them and speaking about them, because we need more human-centered people asking these sorts of questions. The more people asking these questions about the tools you are purchasing or using, the better it is for everybody. I'm pasting in the chat. Go ahead, Kevin, sorry. Well, I was just going to say this next one is a statement, but I think it could be reworded as a question. Go ahead, Josh. I was just going to say I dropped something in the chat about another way to mitigate harm, which is a bit lazy but necessary: putting disclaimers around AI systems that people may interact with. So if you're going to put a chatbot on your site, add something that says, hey, this might display inaccurate info, please double-check its responses. And most systems are now adding that. Right, so to that point, what Christian had stated was around confidence level, and I had tossed into the chat, and you had mentioned a disclaimer, that some models will specifically state which sources they're referencing. But is there anything else either of you can think of as it relates to confidence? Like maybe querying the model itself: with what level of confidence was this answer generated? Or a way to pull out additional information, even around the sources it's referencing? The challenge around that, and a person who has built large language models and understands this better than me would probably have a better take on it, is that everything you ask it, remember, is another probabilistic response.
What these large language models are doing is probabilistically looking at words and saying: what is the most likely next word given the context of this conversation? Not what is the most accurate word in this context, or what is a factual word in this conversation. So by asking it, you can sometimes improve things: asking for confidence levels, asking it to cite sources, saying make sure these sources really exist. But in my experience, they will still confabulate on you. I do not have an answer other than that you have to manually validate things, especially if they're really critical facts or you don't know whether they're true, because asking the large language model creates a circular feedback loop that is not going to help you. Here's the thing I want to point out. A lot of the questions I'm getting right now are really about search: how do I know the search result I'm getting is correct? AI systems, large language models, are not search. I realize that Copilot and Bing actually let you do both, but let's stay in the realm of the large language model for right now. They are ideal for certain things, and again, this speaks to which use cases we're putting them to work on. "Please summarize this publicly available 75-page report and pull highlights from it": that I can verify; I can actually look and cite page numbers. That's one type of task. Or "please correct the grammatical errors in this five-page article I've written." Those are the kinds of use cases where the fact that it's a probabilistic model, that it's really doing math, is not going to get in my way too much, or where I definitely have a way to counteract it. Again, think low risk, high benefit. "Turn these five paragraphs into three bullet-point slides" is a great use of AI that can save me a lot of time, and there are countless use cases like that.
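Josh's point that these models pick the most likely next word, not the most accurate one, can be illustrated with a toy sketch. Everything below is invented for illustration: a real model scores tens of thousands of tokens with a neural network, while here the vocabulary and probabilities are hand-written.

```python
import random

# Toy next-word distribution for one prompt. The probabilities are
# made up; note that a plausible-sounding wrong answer ("Lyon") still
# carries real probability mass.
next_word_probs = {
    "The capital of France is": {
        "Paris": 0.90,      # most likely, and happens to be true
        "Lyon": 0.06,       # fluent but wrong
        "beautiful": 0.04,  # fluent and not even an answer
    }
}

def sample_next_word(prompt, temperature=1.0, rng=random):
    """Pick the next word by probability, not by factual accuracy."""
    probs = next_word_probs[prompt]
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]
```

Most samples come back "Paris," but nothing in the math checks facts: occasionally the sampler emits "Lyon" with the same fluency, which is exactly what confabulation looks like, and why asking the same machinery "are you sure?" just produces another probabilistic answer.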
And I think that's the task for organizations as they start to think about adopting AI: what are the things we can put it to use doing? That's part of the journey. So Shaista asked about providing a list of influential papers or articles on AI that could be recommended for the group. This is not academia, and it's not my area, so I'll leave it to the two of you if you have anything to add. I'm going to drop in a couple that leap to mind. There are two that I would call absolute must-reads, one of which is actually a Microsoft report, so let me send those. And Deb, if you're the Deb I think you are, if you want to drop in your newsletter, the Substack article that you wrote, I think that might be a good resource for Shaista as well, if I'm pronouncing your name right. So the two that I dropped in: one is not exactly a research paper, and yes, Deb dropped in that same newsletter, "Reshaping the Tree: Rebuilding Organizations for AI," which I think is a must-read for nonprofits on the current state of AI and how things are changing. The other is the AI and Productivity Report, first edition, from Microsoft. Some of the early data on how AI is impacting knowledge workers, which is probably most of us, is pretty incredible, and if it holds up, it underscores how potentially transformative this can be for the workplace in ways that are largely positive: people are getting better performance with less stress, which is kind of the magic ingredient. It also appears to level people up to a consistently high level. So if you have D, C, B, and A performers, your A performers move to A-plus, but all the other performers become B-plus or A-minus performers. If that holds up and is replicated, that's pretty incredible. Kim's dropping a bunch of other good ones in there, and I'm sure other folks will drop plenty in there for you, Shaista.
But those are the two I would for sure start with. So Janice asked about giving more examples of possible use cases of AI in the workplace. Sure. Kim, do you want to start on that one? I mean, think about tasks, and again, think it through risk-benefit. Great use cases come up all the time: think about tasks you have to do where it would be really helpful to have something else take a first pass, as if you had a magic intern; we often think of AI tools as a magical intern that never gets tired and will always do stuff. "Can you please write me a first draft of an RFP for X, Y, and Z thing that I need?" "Can you please tighten up these two paragraphs?" Copy-editing types of writing, job descriptions, it does a great job with. Josh, the list is kind of endless, and it really depends on the types of needs you have, but those are some of the things that come immediately to mind for me: tasks that take people a long time, where a good first draft is really helpful. If we want, I could run through some of the Do Fest slides we did earlier this week, Kim. So, Janice, I can run through a bunch of use cases all at once, and maybe I'll do that in a minute, but I want to say something fundamental first. What I've come to realize about these tools is that the fundamental way to think about use cases is this: I need to do something, write an email, write a report, analyze some data, clean up a spreadsheet, create a couple of images, write a blog post, create some data visualizations, whatever the task is.
As you learn these tools and understand their capabilities, and when you're using them regularly and understand how those capabilities are changing, then as I come upon something to do, I can ask: is it worth making a thing to do this thing, rather than doing the thing myself? So I can create, for example, a custom GPT, which is like a little chatbot, to do data analysis for me, to create data visualizations for me, to write RFPs for me. So if I need to write an RFP, I could take the next two hours to try to write an RFP, or I can take the first 30 minutes of those two hours to create a prompt, or a little GPT, that writes RFPs the way that I want, and then have it do that. And instead of spending two hours and a bunch of mental energy on a draft, in 45 minutes I not only have a draft that's probably as good or better than the first draft that would have taken me two hours, but I actually have something repeatable, a tool that does this for me, that I can improve on. And yes, I still have to take that draft, make sure it's good, and clean everything up. But what I've found very true for me, and Kim, I think, would say the same, is that the data showing people are more productive and less stressed is because of that. For me, spending two hours writing an RFP is very taxing; it's cognitively demanding. But creating a little chatbot to create RFPs, then having it create the RFP and going from there, is both more fun and gets me to a better product with less of my own energy and time, frankly. So that's the fundamental thing. So let me now go quickly share. This was a presentation that Kim and Deb and I did, actually. Pardon, Kim? Are you showing the presentation? All right, I'm just going to skim through it. So we can create text, obviously, of all kinds. Pardon, Kim? I think it's cutting off for some reason.
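The reusable "RFP writer" Josh describes doesn't have to be a custom GPT; even a few lines of code can capture the same idea of a fixed instruction plus variable details. A minimal sketch in Python, where the system-prompt wording and the `build_rfp_messages` helper are invented for illustration, not anything Roundtable actually uses:

```python
# A reusable "RFP writer" captured as a prompt template.
# The wording below is illustrative only.

RFP_SYSTEM_PROMPT = (
    "You are an experienced nonprofit operations writer. "
    "Draft a clear, well-structured RFP with an overview, scope of work, "
    "timeline, budget guidance, and submission instructions."
)

def build_rfp_messages(project: str, budget: str, deadline: str) -> list:
    """Assemble the chat messages for one RFP request."""
    user_prompt = (
        f"Write a first-draft RFP for: {project}. "
        f"Budget range: {budget}. Proposal deadline: {deadline}."
    )
    return [
        {"role": "system", "content": RFP_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# The same template serves every new RFP; only the details change.
messages = build_rfp_messages(
    project="a website redesign for a small food bank",
    budget="$15,000-$25,000",
    deadline="March 15",
)
# These messages could then be sent to any chat-model API
# (OpenAI, Azure OpenAI, etc.); the network call is omitted here.
```

The point of the sketch is the workflow, not the code: the 30 minutes of effort goes into the fixed instruction once, and every later RFP reuses it.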
Oh, that's too bad. Let's see. Let me stop. Now, we will be having an actual Ask the Experts on prompt writing specifically, so I encourage people to come back for that one, where we'll walk through some of these things more and talk about how you write prompts. I mean, one of the things that we kind of envision, and Josh was talking about GPTs, but even if we don't have those, is that organizations will start to develop prompt libraries for themselves, ideally ones they can share between departments and between different employees. Okay, so he's just going through stuff you can do. Yeah, if you want to talk through them, Kim, or I can; I was just going to go through them all quickly. I mean, yeah, we should. So that's, yeah, we can go through that one. You can skip that one. All right, you can do slides and presentations, and now Copilot is in PowerPoint, so that's another thing you can do. We did make some images. I'm not going to play the video, but this is me translated into six different languages, speaking what appears to be fluent in all six. Okay, so. Meeting summaries, which AI is very good at. I'm sure everybody sees the little AI note-takers that are on your meetings these days. And I think the rest was covered. Yeah, and then when we got to the end, this was our list of things that we offered to do. We've been doing these Do Fests, where we take an hour and just take requests to actually do work for people, and we do it in real time using different AI tools. So these are the kinds of things we offer to people. Obviously it's much more than that, Janice, but hopefully that gives you a flavor of it. It's a lot. Okay, so let's get to some of the submitted questions. I'm just taking a look at these. We did get a lot of questions about prompting. Some people say: my biggest challenge is that I've never really used AI before. What do I do, right?
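The prompt library Kim envisions can start as simply as a shared file of named templates with fill-in blanks. A hypothetical sketch in Python, where the template names, wording, and placeholders are all invented for illustration:

```python
import string

# A tiny shared prompt library: named templates with $placeholders,
# the kind of thing departments could keep in one shared file.
PROMPT_LIBRARY = {
    "job_description": (
        "Write a first-draft job description for a $title role at a "
        "$org_type nonprofit. Include responsibilities and qualifications."
    ),
    "meeting_summary": (
        "Summarize these meeting notes into decisions, action items, "
        "and open questions:\n\n$notes"
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Fill in a named template; raises KeyError if a field is missing."""
    template = string.Template(PROMPT_LIBRARY[name])
    return template.substitute(fields)

# Any staff member can reuse a vetted template with their own details.
prompt = render_prompt(
    "job_description",
    title="Development Associate",
    org_type="food-security",
)
```

Even without any code, the same idea works as a shared document of copy-and-paste prompts; the value is that the wording has been vetted once and reused by everyone.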
And we were saying earlier, there are a number of free tools; try them out, but in very low-risk situations. I was in another presentation where we talked about dedicating some time, like an hour on a Friday every week, to try out these tools and get to know your own personal style. Because here's the thing about AI and AI tools: it's not a technology task as much as it's a question-writing task. Sometimes AI has been a great thought partner for me, if you will, because it's helped me think through the question I'm asking, right? So go through that process and get to know your own style before you roll out AI at an organization-wide level. That's when you really wanna think about: do we have a policy? Are we gonna put a policy in place, and things like that. But individually, just to get to know it, even for personal things, try it out. And one last comment on this: I really encourage you to accept that some things won't work, and understand that that's just part of learning. If you go to use an AI, whether Copilot, ChatGPT, Bing, or something else, to do something and it doesn't work, and that was your first time, and you're like, oh, this AI thing's terrible? Understand very clearly that that was the moment you were the worst you'll ever be at using AI, because you were just starting to use it. And that's the worst AI you'll ever use, because every AI from that day forward is gonna be better than the one you just used. I try things all the time that don't work. I'll spend half an hour trying to get an AI to do something and then say it's just not gonna happen; that just happened yesterday. But so many things do work, and some of them do things that I could never have done without AI.
A couple of my friends, for example, have asked me about this. From early in my days learning AI, here's a great use case: ask your friends if they have any gnarly emails that they don't like writing. Like, oh, I have to write to my landlord because our super is smoking in the building. It can craft that email for you, and you can ask it to change its tone: please sound friendly but firm, right? So you can try things like that. And sometimes people wanna know: where can I get trained? Where can I learn all of this? Well, we do have classes, and TechSoup is a great place. But I think trying it out, like jumping in the pool and getting wet, is the way. Can I tell a quick story, Kim? So this is the college admissions one; I'll give everybody this example as well. My son's 16, and my wife is furiously trying to figure out how to help him get into college. This is a real example of use. While I'm making breakfast for the kids, like seven o'clock yesterday morning, she says, I'd like your help doing something with AI. I'd like a slide deck to explain these two books that I've been reading to our son about college. And I said, okay, I've got to make a quick breakfast, but let me talk you through it. So I put her in front of my Copilot account and talked her through a prompt. We used that prompt to create an outline for a slide deck, then dropped that outline into Copilot to have it generate a presentation. And let's see if I have that up. Hang on, I have to get the slide deck up, of course. Why am I not seeing this thing? Hang on, bear with me a moment. Let me try one more time here. Let's see if it'll allow me to share this. All right, I just have to do the entire screen. Okay. And within about 10 minutes, we had this slide deck based on these two books. I never even touched the computer; I just talked her through doing all of this while I was cooking breakfast for the kids.
And this was about 10 minutes of work. So that's an example of the kind of thing you can do. All righty, so let me see. I was just looking at these questions. I think we've gone through a lot of them around reliable information. I think you can stop sharing; I'm seeing myself. Are you gonna share more on your screen, Josh? No, I'm good. Oh, am I still sharing? I thought I'd stopped. I'm sorry, my bad. All righty. But I think people know: verify, because we can't know the types of errors it's going to make. When is it safe? There is one article I will put in here if I haven't already, from Harvard Business Review, which is where we got that risk-benefit matrix. Harvard Business Review talked about this in the early days of AI being available: oh my goodness, what are people gonna use this for? And about helping organizations find their use cases. That is one of the tasks that needs to be done: figuring out, in your organization, how do we want to use this tool? I do encourage people who are thinking about how to roll this out at their organization, or in their department, to really talk to stakeholders, get to know the time-consuming tasks that they'd love a first draft for, and then vet those. And I realize another question, I'm not seeing it right now, but one of the concerns that comes up often when I talk to people at nonprofits about AI is: what about the future of my job, right? And there are a couple of different ways to look at that. Number one, think of AI not as a tool to replace humans, but as a tool that augments the work we do. That's a very different mindset. It's not like we all have lots and lots of spare time in our workday, right?
So if I can save some time writing a first draft so that I can spend more time writing a higher-quality draft, so that it is more authentically me and sounds more like me, that's a better use of my time. And I've cut the time roughly in half, right? So talk to stakeholders in your organization about the kinds of work they do, and think about ways you could leverage this type of capability to help with that. Humans need to be very much part of this, right? If there are no humans in the loop, remember, these systems are just doing math back there. That's all they're doing. Yeah, but are we still picking questions, or are we pulling our own? I just jumped in here. I think we have some more submitted questions. Here's a pretty good one. So the question is: I'm curious to hear about potential issues that might arise by encouraging staff to use AI tools such as generative AI, specifically ethical gray areas and guiding principles for staff. You want to take that first, Kim? You want me to grab that one? I mean, I can take the first stab. My first answer to that is: great question, thank you for asking it, whoever you are. Again, this comes back to your AI policy, having a discussion, having what I think of as an intentional AI rollout. And that means doing it with a sense of: how do we want to train staff? What kind of guidance are we gonna give people around what tools to use? What kinds of use cases are okay? What kinds are not? Giving people clear guidance is really the only way. I don't know if you said this already, Josh, and this is a Joshism that you say very well, but it's the way to get the most benefit and mitigate risk at the same time: having a clear set of guidance. We can't stress that enough. And that is the governance in the title of this Ask the Experts, right? And this is going to change.
So prepare to create a policy for your organization and a set of guiding principles that you may be adapting over time. Do you wanna add to that? I think you covered it. I mean, we've been talking a lot about that. I think some of it is, to me, kind of easy: these are the tools we want to use, don't violate copyright, understand that the data reflects existing biases from the real world, so you have to allow for that, and provide some basic training. I think where it gets complicated for the organization is really around the issue of transparency. In many respects the ethics are straightforward, not easy by any stretch, but more obvious and straightforward to address. Transparency is trickier: if we use AI to create something, do we say that we used AI to create it? How do we say it helped? Do we say it made it entirely? And if we're using AI to make decisions, to write reports, do we say we're doing that? So, you know, we have established a standard at Roundtable that if we're using AI to produce something, we're going to say that AI was involved in the creation of that thing. We're going to be transparent about it, at least that much, to say we did use AI for this. I think that's an area where the ethical right thing to do is less clear, but organizations, I think, want to confront that and decide. Thanks. And along that vein, there was another question regarding: what is a way to ethically train AI that doesn't involve the unlawful use of people's creative works? Is art or writing generated using AI an original work? I think that gets back to the earlier question around whether the large language models that you're using were trained ethically. Fortunately, I think the courts are going to make this better. If systems are going to be trained on copyrighted data, then they will have to pay some sort of licensing fee to those copyright holders.
And The New York Times is quite famously suing OpenAI for quite a large sum of money right now. I think it will become financially and legally untenable for these systems to just hoover up everything that is publicly available and say, well, it was publicly available, so we can use it to train our models. I think that's what happened, and it's unwinding now. So I hope that answers the question, but right now most of these systems were trained on copyrighted and private information that just happened to be available via the web and got hoovered up. Yeah, I think that makes sense. I think it might be helpful for some organizations to know what errors to look for and what the limitations are. One of the things I would certainly look at is that Unmasking AI book that Kim dropped a link to. I think it really addresses a lot of the real societal harms and things that can be done. I don't know if you wanna talk about that, Kim, because I know you just finished that book recently. Yeah, I mean, it's not a perfect tool, right? Here's the analogy, and maybe I'm betraying my age, but probably not: remember when the internet first made its appearance, and there was a lot of, oh God, there could be garbage out there, what's gonna happen with copyright? Think of AI as a new tool on that scale. And as with the internet, there were things we needed to learn about it, and we weren't that good at it at first, I'll be the first to say, right? But it is a new kind of transformative way of working. I mean, I don't know that I'm answering this question, but I just think you need to think about it on those levels, right? And ask: where is your organization's appetite? Really, think of the internet in the 90s; it's that much of a change. So a lot of this is just new and is going to be sorted out. And you almost need an ethicist of sorts for some of these kinds of questions.
For example, some things clearly are unethical right now. If I go into an AI system and I say, write me something like Kurt Vonnegut, or write me something like Dr. Joy Buolamwini, that's just unethical, right? I'm asking it to recreate the writing of a particular person who has copyright protection. If I say, make me a painting in this living artist's style, that's just really obvious. So that part, I think, is easy in terms of avoiding some of the ethics issues. I think it gets harder, right? If I ask for an image of a group of South Asian children playing, do I have the ability to evaluate that photo and decide if it's offensive, if it's reinforcing stereotypes? There are a lot of challenges around that. So I keep saying some of it's easy; some of it really is like, well, how do we deal with this as an organization? That's why you have to be using these tools and thinking about them and having these conversations. Yes, because this is a group of nonprofits. I mean, Dr. Joy Buolamwini, I realize I didn't answer that part of it. She tells the tale of the development of these tools. And while things have happened in, like, miracle time, right, the changes have been astronomical and are expected to continue, she maintains, and I recommend her book to anyone here who registered for this session, that yes, there is a lot of built-in bias in our world's data, and we have to keep calling it out. The good news, if you can call it that, is that there are groups dedicated to this. There's the Algorithmic Justice League, which she founded. And someone had also asked a question here about, are you leaving? Okay. I've got to jump, but continue on, thank you so much. Oh, look at the clock, my goodness. I'm so sorry. Anyway, someone asked, you know, as nonprofits, how can we help shape policy?
Get to know some of these organizations that are working on policy. See what they're doing. Check out the Algorithmic Justice League and see how you can be involved. The more human-centered people doing this work, the better off we are as a world. Well said, Kim. Do you want to stick around and take more questions, Janet? I'm open to it, whatever you want to do, Kim. I mean, I can answer a couple more questions if people want to unmute and ask, and I think I dropped the sources that I wanted to share in here. Yeah, if anyone has a burning question, ask away; if not, keep the questions coming later. There's a lot that's still in the mix and being sorted out, right? A lot of these questions are on copyright, and goodness, around data privacy. There are going to be a lot of questions here, but be engaged in it, be engaged in the conversation. These are largely not technology conversations but questions around, again, asking questions, ethics, principles, those things. Well, again, we'd like to thank our sponsor, Microsoft, for this webinar today, and our experts, Kim and Josh. Thank you so much, and thank you for attending, and we'll see you on the next one. Have a great day, everybody. Bye-bye.