Thank you again for being with us at the Future of Work conference. I want to go ahead and introduce our speaker today. The topic is internalizing data equity into your nonprofit organization. Kai Williams is an expert with LA Tech for Good, and she offers a series of equity and ethics workshops. She's also executive director at a nonprofit, so she knows all your pain points, all your woes, and everything. We appreciate you being here, Kai, and welcome to TechSoup. And thank you for being here with the Future of Work conference.

Thank you so much. I'm Kai Williams. I'm the executive director of the International Wildlife Rehabilitation Council, and I'm also a workshop facilitator for LA Tech for Good. I'll mostly be talking about the work I do with LA Tech for Good, but bringing in a few examples from my work with IWRC, which is the acronym for the International Wildlife Rehabilitation Council.

So today's session is really a plea. It's a plea to use documentation to internalize equity within your nonprofit operations. And I hope through these next 45 minutes or so, it becomes really clear why that's important, and that you get a bit of a roadmap for how to do it. So we'll be talking about equity and ethics and how that fits into algorithms and AI and all the other things we've been learning about in terms of the future of work, making the case for you and for your organization, and how to implement it. And then we'll wrap up with some good Q&A. So lots going on.

LA Tech for Good's mission is to foster social change by equipping data professionals with equitable and ethical data practices. You might think at first, okay, professionals, that's not me, that's the people on the left, like Rachel Whaley, who is the data equity program manager at LA Tech for Good and also a consultant. And yes, those are some of the people. But if you keep reading the mission of LA Tech for Good, it defines data professionals as everyone, all people who use data. Well, then that includes me, and it probably includes a lot of you. So I'm over there on the right as a data user. I am not a data scientist, but I use data all the time in the governance and the management and the programming of my nonprofit. So LA Tech for Good really connects us in the data community and shows that the data community is not just data scientists. It's all of us using data. And wow, at nonprofits, we use a lot of data.

I left this next slide in, even though this is maybe not the most interactive of talks, because I thought it was still a really helpful slide. The chat has been really active in the other sessions, and I hope to see it being active here: ask questions in the Q&A, or ask questions in the chat. There's a great possibility here for peer learning as well as what I'm talking about. There are a lot of experts here listening to the talk, a lot of people with great experiences. So share what you know in the chat. Use I statements and listen to your peers. And I know data equity, and data practices in general, can be one of those topics: some people find it mind-numbing, some people find it a bit overwhelming. But bring an open mind and a growth mindset. This talk specifically is not gonna be overly technical, and it's something that we can all use, whether you're an executive director of a nonprofit or a program coordinator.
One of the things that I've found fascinating about doing this work with data equity and ethics is it really shows how much data work permeates all of nonprofit culture, and how important it is for all of us to question data no matter where we are. And finally, LA Tech for Good is always interested in feedback and improving what we're doing. So very open to feedback.

So our objectives: we're gonna talk a bit about equity and how equity and data go together. Then, we all know biases are something we're familiar with, but we're gonna talk about biases in terms of data, in terms of the power dynamics that get reflected in data, and how all of that shapes the design of algorithms and then shapes the work we're doing in nonprofits. And finally, and I'd say most importantly, we're gonna be learning some practical methods to embed ethical practices into our data projects. And yes, I will talk a bit about what data projects are, because that was something that was a little intimidating to me when I started working with LA Tech for Good and they asked, what data project are you working on now? I was a bit thrown by that.

So let's start off with a warm-up poll here. How familiar are you with the concepts of data equity and data ethics? Put a number in the chat from one, they're brand new concepts, to five, you can confidently define both terms. Let's just get an idea of where we're all at. Nice to have that check-in. Awesome. So I see a lot of people where it's either new or they're kind of in the middle ground. Okay, great to know. And then a few people who've got a lot more experience, that's fantastic. So a great mix of people. And like I said, I think sharing some of this information and some of our stories in the chat is really gonna bring a lot more to this session. Excellent.

All right, so why? Why does this matter? Why am I talking about this? Why are we talking about this as the future of work? It matters because we're all humans. And as humans, we have bias. I think as people in this field, we tend to know that's the case and manage for it. But what we don't always think of is that our data, because our data comes from humans, our data also has bias. And we need to be thinking about that in order to, as Rachel Whaley and Jen Holmes say in a blog post on the LA Tech for Good website, maximize the positive impact and minimize unintended consequences. What Rachel and Jen are saying is that's what data equity does. Data equity helps maximize the positive and minimize unintended consequences. It's an oversimplification, but when you think of our missions as nonprofits, what are we trying to do? We're trying to maximize good and minimize harm. And we'll talk a lot about how data fits into that. So this is very core. This is not a tack-on add-on. This is core to our mission.

Data is power. You can think about that for a moment and it starts meaning things to you. And I encourage you, if something immediately comes to mind, put it in the chat. What does "data is power" mean to you? And while you're doing that, I want to read a quote about one thing that data being power means. This is from a book called Data Feminism, which is actually available online; you can read the entire book online from MIT Press. And it's a pretty phenomenal book.
I'll be referencing it a few more times in here because there are lots of really interesting things going on in it. But this particular quote is from Maria Munir. And Maria says: if you refuse to register non-binary people like me with birth certificates and exclude us in everything from creating bank accounts to signing up for mailing lists, you do not have the right to turn around and say that there are not enough of us to warrant change. So again, that's Maria Munir being quoted in Data Feminism. And data is power. If you don't give people the opportunity to say who they are, you can't say they don't exist. We're gonna get into this a bit more, but I think it's a really powerful quote that shows how powerful data is. What gets counted counts.

That said, there is what they call in Data Feminism the paradox of exposure, where those people who perhaps have the most to gain from being counted are also at the most risk from being counted. So think of the 2020 census and the push to have citizenship data in there. There's a huge risk having that data in there, even though there might have been other interesting things in terms of how we can best serve people. Or think about a survey, perhaps, for unhoused individuals. Say the survey is asking: do you have the ability to feed your kids three times a day? There's a lot of good information in there in terms of what a food bank or other facilities could do to support people. But that might also have implications for DHS or for child protective services, so somebody might feel uncomfortable putting that information in there. So people who most need to be counted are often also most at risk. So understanding and thinking through the unintended consequences of what we're asking, what we're doing, how we're asking it, and where we're storing it: there's a lot of important stuff in there. So you can see, data is power. Can't say that enough.

So when we're working on this, there's a flow to the work. And it starts with design. It starts with design and context. That's our floor. That's our base. That's where we need to start. And then it flows up into thinking about data equity and ethics and working on that. And if we're working with machine learning, if we're working with AI, then that's the apex, up there in AI ethics. But it needs to start down there in design and thinking about the context.

I really like this next quote here, I think it's really nice: data is information made tractable. I love pairing it with this picture. Because when you look at this picture, I immediately start classifying things. I see the road. I see the power lines, the power poles. I see the speed limit sign. To make sense of this picture, I'm classifying everything in my head. I need to do that to understand this picture. And we do that with data. And looking at this picture, some of those things that I'm classifying are artificial boundaries. The road, the speed limit signs: those are artificial boundaries, but they are critical for us to function as a society. And it's the same thing with data. Classifications, hierarchies, categories. We need those to help make sense of the information and to function as a society. But the real danger is that once we create these classifications and we accept them, we rarely go back and question them again.
And that's why there is a need for this work: to put an equity lens on our data usage and our data classification. So think of questions like the ones you might have seen in the 80s. What's your gender, male or female? Or select your race, choose one. That's leaving out a lot of people. A lot of people. And once those classifications are created, once they're there, it takes a lot for us to start questioning them. And we need to be doing more of that. So we need to dig in and we need to question. As we go forward in this talk, we'll talk a bit more about how to do that.

So going back to that pyramid. It really starts with our design. Before we can obtain equity, we need to think about our design. We need to think about the process: thinking, questioning, exploring. There's a lot there. These are some questions here from Sasha Costanza-Chock, who is a data scientist and data expert. Actually, before I get into these too much, I wanna talk a little bit about what a data project is. Because there is a technical term for what a data project is, but what we need to know in nonprofits, for what we're doing, is that a data project is the collection, the storage, modeling, reporting on, or analysis of data. So in nonprofits, what is that? And please list some things in the chat that you're doing that you might think are data projects. Some that come immediately to my mind are surveys, dashboards, needs assessments, impact analyses. We might not think of them as data projects, but we're working with data, we're working with these projects all the time.

So when we're starting to design them, think of these questions that Sasha suggests here. What story is told? How is it framed? Who decides the scope? And think of them at the beginning of the project. If possible, think of them when you first become involved. But even if you're not there at the beginning, it's still helpful to think about these halfway through the project, even if it's for the first time. And honestly, it's also really helpful to come back to these. Things change in our projects. We're moving forward, we forget about things. So coming back and looking at these questions again can be very important in terms of keeping us on track, keeping us going in terms of equity, and documenting it. We need to interrogate our data, and documentation is a fabulous way to interrogate the data. So not just answering those questions, but jotting down the answers, even if it's on a piece of paper or in a Google Sheet. Thinking about it, being able to come back to them.

So this is, again, I said I'd come back to Data Feminism, and this is a screenshot from Data Feminism, from chapter four, which is my favorite chapter. I've read it four or five times and will read it many more. The graphic here is from an interactive graphic that was in the Guardian, where people could choose some demographics about themselves and see how many people in the US Congress were like them. One thing they did when they set this up, and I think the screen is a bit small to see this, is the options they chose for gender: cis male, cis female, and then trans and non-binary. Well, if you click trans and non-binary, then it immediately shows zero people are like you in Congress. And that's an interesting and powerful thing right there.
But then if you're thinking about it and thinking about the data, you're wondering: have they been asking people in Congress? Have they been giving them that option of gender, or have they just been giving them two options? And even if they were giving them these three options of gender, was everybody comfortable saying that? So you start looking at this visualization and thinking about the data, and you really see that it's a way to interrogate the data: not just to see what's on the surface, but to really think about it.

And I also wanna bring this up because it comes into some considerations when we're collecting data. That's why on the left-hand side there it says counting, consent, and context. It goes back to that paradox of exposure: those who might have the most to gain from being counted often have the most to lose. And so consent to being counted is very important. There are a few models I wanna suggest for that. On the legal side, GDPR, the General Data Protection Regulation, I think is a great example. It's the one that IWRC uses in our work. We decided to use it not just for our constituents in Europe who are covered by it, but as the model for everyone. So I think that's a really great model, but there are other models as well, even non-legal models. I wanna suggest the FRIES model, which says consent should be freely given, reversible, informed, enthusiastic, and specific. And the Allied Media Project out of Detroit did a really interesting thing using the FRIES model when they were doing their website analytics design. So in setting up their website, thinking about cookies, thinking about all those analytics, they actually used the FRIES model: how are we gonna collect this data? Whose data are we collecting? What do they know about it? And it went a couple of steps beyond GDPR. I think it's really phenomenal. In a Medium article, they talk a bit about how they set that up, and we can get that link in there for you later.

And as well as consent, context: what is the context that you're using this data in? What is the context that somebody else might use this data in? There's a lot to think about there. And lack of data can even be a structural problem itself, and it certainly can show structural problems. And yes, I know I have a couple of quotes in here twice, like "what gets counted counts," but I think they're really key quotes, so I want to highlight them. That's done on purpose, multiple times.

So in Data Feminism, they created some metrics around their book. They had values related to their book, and they wanted to make sure that they were really thinking about those values and meeting them, not just having them as an idea of what they'd like to do. So they created their values, they designed the metrics around them, and then they published them. You can actually find these in the online book, in the appendices. So they looked at structural problems, like racism; I think they chose seven structural problems that they really wanted to be looking at. And they set aspirational metrics for their book. In this case, they said that they wanted 75% of the citations of feminist scholarship to be from people of color. So that was their aspirational metric. And the first time they counted was in their open peer review draft. In the open peer review draft, they ended up with 36% from people of color.
And in their final copyedited manuscript, they ended up with 32%. That was partly because during peer review, there were a lot of suggestions that they needed to have more cited information, and when they went to look for more cited information, they were having trouble finding enough from scholars of color. So they obviously did not meet their aspirational goal there, but they still put that out in transparency for everybody to see. And they talked about why they didn't meet it. And they also uncovered some other structural problems in the field, such as academic access, and made all of this very visible and allowed for discussion to be created. So metrics are one way that you can also interrogate your data. Metrics can be a way of documentation and a way of interrogation.

IWRC is actually looking at doing that with our values. We have values that we wrote, and after reading that in Data Feminism, I thought, oh, that would be really interesting to do. So I drafted up a page, trying to think of what metrics we could use to be living our values. And I ran it by the board. And the board was a little bit concerned; they were like, oh, well, I don't know that we wanna put numbers out there, and won't people just concentrate on the number and not on what we're doing? So it was a really interesting discussion, and we haven't finished it yet, but it's been a great exercise. And I still hope that we actually have it out there in the wild in some form in the next few years, as we're looking to connect those values to what we're actually doing and just see where we're at.

So, documentation. I hope some of this shows that documentation really is a tool for equity. We've alluded to this a few times, and I think I've seen a couple of things in the chat where people are very much aware of this, but we need to think not just of the data when it's in our hands and what we're meaning to do with it. This wonderful quote by Shannon Vallor and Brian Green really speaks to what happens to the data after it leaves our hands. That is part of our ethical responsibility: considering and planning for what happens to the data after it leaves our hands, on purpose or not.

Jumping back to this graphic from the Guardian and from chapter four of Data Feminism, I want to highlight this time the privilege hazard that might come in from the people who are designing that survey. Perhaps the person designing the survey was cisgender and just didn't even consider the fact of, oh, we should ask for more than two genders. They didn't see the harm; that's the privilege hazard, not seeing the harm. Documentation, interrogating our data, and working through frameworks are ways to help us account and design for these privilege hazards. And as I said, counting always needs to be accompanied by questions about consent, personal safety, cultural dignity, historical context. We need to balance the harms and the benefits. It goes back to that oversimplification of our missions and what we're doing, and how data actually can be very key to us meeting our missions, or accidentally doing harm. This is actually what brought me into working with LA Tech for Good.
I was filling out a Candid survey, one of those annual surveys for GuideStar, and that year, I think it was 2020, they were asking a lot more demographic data than they'd asked before about our organization, our board, our constituents. And it really made me think. I thought, well, I haven't been asking this information of our people, because we're a wildlife rehabilitation organization. We support people who take care of injured and orphaned wild animals and provide them training and resources. We shouldn't be asking them for information that's not related to that. We don't need to hold that information for people; it's not our business. But as I was going through this and doing this for Candid, I was thinking: we're also really concerned as an organization about equity and inclusion in our field and opening things up. And if we don't know who is in our constituency, how people identify, who's there, then we don't know if or who we're missing, and we don't know when we're doing better. So it really started making me think, oh, well, maybe I should be gathering these demographics and asking people about it.

Around that time, I saw an LA Tech for Good workshop advertised and joined it, thinking, okay, that's gonna really help me figure out exactly what questions I wanna ask, how I wanna store it, how I wanna present it. All of those questions you think about when you're working with data. But what it did by the end was make me think, oh, I have a few more issues than this. IWRC is an international organization. In some countries, some of the things that we were considering asking about were against the law. And did we want to have the responsibility of holding that information, where if somebody else had access to it, it could really do harm to people? So it made us take about three giant steps back and say, okay, should we be doing this at all? And the answer was yes, we should be doing this. We do want to be doing this, but we need to be exceedingly careful about consent as well as data storage. We also might say, these questions are things we don't wanna ask because they are too sensitive. And while yes, it would help us in terms of our equity and inclusion to know this information, it's not worth the possible risk. So really thinking, and going back to that design, rather than just starting the implementation process.

And that brings me to why documentation. Accountability: via our values, via metrics, whether they're aspirational metrics or metrics of what we're doing, via audits, internal or external, and oversight. Documentation provides all of those things and leads to them. Documentation builds in ethics checks. Another example from the International Wildlife Rehabilitation Council is that we have a board where people can post animals that are unable to be released, that might be in permanent captivity. But not every animal that can't be released really should be in long-term captivity. And what we're trying to figure out how to do on the technical side is, before somebody can post an animal there, we want them to go through a list of questions that they have to answer. We haven't even decided if those answers are gonna be made public or if they might just be for that person. But it's gonna walk them through the ethical side of what is best for this animal, what is best for the animal from a welfare point of view.
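To make that kind of built-in check concrete, here is a minimal sketch, in Python, of a posting gate that refuses a submission until every ethics question has an answer. Everything in it is hypothetical: the question wording, field names, and function are invented for illustration, not IWRC's actual system.

```python
# Hypothetical sketch of a built-in ethics check: a posting is rejected
# until every question has a non-empty answer. The questions and field
# names are invented for illustration, not IWRC's actual checklist.

ETHICS_QUESTIONS = [
    "Why can this animal not be released?",
    "What quality of life can this animal have in long-term captivity?",
    "What alternatives to permanent placement were considered?",
]

def validate_posting(posting: dict) -> None:
    """Raise ValueError unless every ethics question has a real answer."""
    answers = posting.get("ethics_answers", {})
    missing = [q for q in ETHICS_QUESTIONS if not (answers.get(q) or "").strip()]
    if missing:
        raise ValueError(f"Posting blocked; unanswered questions: {missing}")

# Example: this posting would be rejected, because two answers are blank.
# validate_posting({"species": "red-tailed hawk",
#                   "ethics_answers": {ETHICS_QUESTIONS[0]: "wing amputation"}})
```

The design point is simply that the documentation step is enforced before the action, rather than left as an optional afterthought.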
And for data as well, documentation is a way. It builds in ethics checks. It makes us think about these things. Documentation creates transparency. You'll see a lot of this, where people are putting these on their websites, like the Allied Media Project, or the one I'll come to in just a little bit, Data for Black Lives. All of these are putting this information out there, and it creates transparency. And finally, documentation helps us identify the gaps and the assumptions in our data, in our work, in our programs. So it's really important. I know we're all exceedingly busy at nonprofits, and we feel we do not have time; we're so focused on doing the program, getting it out there. But that's why I said this talk is really a plea to consider documentation. It is core to what we're doing.

How do we do this? We don't have to do this in a vacuum; a lot of this has been done. Fortunately, there are some nice templates for us to work from. Think of it the way Timnit Gebru, one of the authors of Datasheets for Datasets, puts it: in the electronics industry, that's just done. No matter how simple a component is, it has a datasheet. Think of commercial food: there's a label with every commercial food product you're buying. Or, for me, I go back to high school and think of the safety data sheets that would come with any chemical we were maybe using in a chemistry class. In the US, OSHA actually requires those documents to be used there. And they talk about: what are the components of the chemical? What are the safety considerations? Is it flammable? How do you get treated if something happens to you with it? What sort of personal protection should you be using? We can do the same thing for data.

So here's a very prose-based example, and this one is from Data for Black Lives. What they did is they consolidated state-level data to explore the disproportionate impact of COVID-19 on Black people in the US. But before they started really digging into that data, they first considered the equity and ethics implications. And they wrote it down. They wrote down what their intended plan and purpose was. And they also wrote down what they did not want to be doing, which is just as important. So this is a really excellent, more prose-based example of what you can do and how you can think through it. I certainly recommend taking a look at that.

But there are also a lot of other models. So here are some examples, and we're putting these in the chat as we go. And I'm sorry about the slides looking like they want to be edited. There are a number of frameworks. The Data Nutrition Project was created mostly for AI work, to combat bias in AI, but it can be used for non-AI projects really quite easily. The example here is tax bills in New York City, but if you go back from that example, you can see their template, so you could use this yourself. The Data Ethics Canvas is from the Open Data Institute, and it helps you really consider ethics in all stages of data work. I like it because it's a bit more visual and colorful, and I see it as a higher-level overview that lets you, no matter where you are in your work, really start thinking about these questions. It's not overwhelming, I would say. Now, Datasheets for Datasets can be overwhelming. It is an amazing list.
There's also a paper available online related to it as well. Datasheets for Datasets was made for machine learning, but it really works for all sorts of datasets. It's a series of questions, with five or six questions in each category: motivation, composition, the collection process, the recommended uses of the data, the distribution of the data, and the maintenance of the data. So everything from that basic idea of why we might want to be doing this, to what's going to happen after it's out in the wild. It takes you through a series of questions. I really recommend looking at these, but also not being overwhelmed by them. The first step when you're using one of these frameworks is to decide which questions you would answer if you were to create a datasheet for your dataset. Not all of them are gonna be applicable to you. Not all of them are gonna be of equal importance to your work, depending on what you're doing with your nonprofit. So when you look at these, don't feel the need to select all of them and do all of it. Just get a start; actually, I'll talk more about that in a minute.

So data is something that we use. We should not be used by data. That's something I want you to take away from this: we make data, we use it. That means we can use it differently.

So let's jump for a quick moment into AI, and let's think back to those boundaries and classifications. AI and algorithms are really another way for us to help make sense of data and of information, and to make things easier. But of course, as this lovely quote says, models are opinions embedded in math. And as you've heard before, I think even in the last two days during this conference from other presenters, data is an amplifier, an accelerant. AI, machine learning, it accelerates, it advances what we're doing, but it's not a panacea, it's not a magic solution. We need to be doing the thinking behind it. So some of the questions to consider are: where is algorithmic processing employed? Where should we be using human decision makers? There was an interesting case study in Pennsylvania, where they were looking at using machine learning to select which children needed to come into state custody, into care, and comparing that with human decision makers. There's a really interesting paper written about that. How can we hold algorithms accountable? And I don't like the way I said that, because we shouldn't be holding the algorithm accountable; we need to be holding ourselves accountable. It goes back to thinking of that pyramid. If you remember that pyramid, AI ethics is at the top. The base of the pyramid is our design thinking and our context, so we need to start there, then work on our data equity and ethics, and then work on the AI ethics. Only by starting at that base and moving up can we really do a reasonable job of mitigating bias in AI. Because it's coming from us. We're humans, humans are fallible, humans have bias, and all of this comes from humans. And it doesn't mean that it's terrifying and we should never use data and we should never use AI. That's not what it means. It just means that we need to be thinking about this. I love the subtitle of Beth Kanter and Allison Fine's new book on leveraging smart tech in nonprofits.
The one that says: staying human-centered in an automated world. And so I see documentation as a way to stay human-centered, to really think and put ourselves in there. And that's really me making the case right there, actually. That's my plea. But ask the tough questions, get stakeholder buy-in and leadership, get it aligned with your mission and your values; you're implementing things, you're governing, but of course it never stops there. These are steps to take, and it's not one and done. It's a cycle. As you're governing, you might be going back and asking questions again, and then you're moving through. And we all know at nonprofits that stakeholder buy-in is something that we're continuously working on. It's not something that you can do once and say, all right, yep, I got them on board, they're there. It's something you have to nurture and tend to. That's definitely the case here as well.

But that said, don't let perfection get in the way of action. You can do this work little steps at a time, just prioritizing better data practices. We can start; we don't have to do it in huge ways. You might even document in very small ways.

So, taking action. Well, everybody who's here listening to this is already taking some action, because they're learning more about it. There were a lot of people who put one and two in terms of their familiarity with data ethics and data equity, and hopefully this helps move you a little bit. All the references and resources we put in the chat are ways to move you further, to get more awareness, to think about this. And you'll find, just in your day-to-day work after having thought about this here, you'll be bringing it in in unexpected places. I saw some great things in the chat about salaries in different places, where, yeah, data is important, and it really comes down not just to the ethics and equity of data, but of our entire work and what we do. So awareness is one thing, and I think just being here, we're all doing that, and there's a lot more we can do.

And then, document. Maybe you're not ready to use a framework. Maybe that's not where you're at right now, to take one of those frameworks and really dig in and take time to do that. But when you're working on a project, open a new Google doc and just jot down: why am I doing this? Who's gonna see it? Just a few thoughts like that. And then maybe three days later, when somebody else at your nonprofit is asking questions, you can direct them to that document. And then maybe they ask a question, and you add to the document, and you move forward that way. So it doesn't have to be a big, huge thing. You can do this in stages. Every little bit helps. And maybe later, maybe in a couple of years, you're at the point of, oh, this is a big project we're working on, let's pull out one of those framework templates I learned about and work there. But it doesn't have to start there.

And finally, be a voice. Question. Question what you're working on. Question what you're reading. Question the data, interrogate the data. That's not necessarily a negative, either. But interrogate that data, and get involved, discuss, involve others. One thing that we're talking about when we're talking about the paradox of exposure, when we're talking about the privilege hazard, is those unknown unknowns, those things that we're not thinking about because they're not in our sphere of thinking. So by discussing and involving others, we can help decrease the unknown unknowns, working together. And get involved.
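For readers who want a nudge to actually open that doc, here is a tiny sketch that stamps out a starter file with the kinds of questions Kai mentions, loosely echoing the Datasheets for Datasets categories. The exact wording, filename, and `write_starter_doc` helper are illustrative assumptions, not an official template from any of the frameworks above.

```python
# Minimal sketch: generate a one-page documentation starter for a data
# project. The questions loosely echo the Datasheets for Datasets
# categories (motivation, composition, collection, uses, distribution,
# maintenance); wording and filename are invented for illustration.
from datetime import date
from pathlib import Path

STARTER_QUESTIONS = {
    "Motivation": "Why are we doing this? Who decides the scope?",
    "Composition": "Whose data is this? What gets counted, and what doesn't?",
    "Collection": "How is consent obtained? (e.g. the FRIES model)",
    "Uses": "Who's gonna see it, and in what context?",
    "Distribution": "What happens to the data after it leaves our hands?",
    "Maintenance": "How long do we keep it, and who deletes it?",
}

def write_starter_doc(project: str, folder: str = ".") -> Path:
    """Write a starter documentation file and return its path."""
    lines = [f"# Data documentation: {project} ({date.today()})", ""]
    for category, question in STARTER_QUESTIONS.items():
        lines += [f"## {category}", question, "", "(your answer here)", ""]
    path = Path(folder) / f"{project}-datasheet.md"
    path.write_text("\n".join(lines), encoding="utf-8")
    return path

# Usage: write_starter_doc("member-survey-2024")
```

A plain Google doc with the same six headings works just as well; the point is that the questions exist somewhere your colleagues can find, answer, and revisit.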
LA Tech for Good is obviously one great place to get involved, but as we put into the chat, there are a lot of amazing organizations doing work on this, and there are so many more that are not in the chat. You can just Google them, it's amazing.

As we end, thank you so much for putting these into the chat. Karen, the executive director of LA Tech for Good, is offering a discount code to everybody who is here, if they're interested in attending a workshop. It's 10% off the standard registration price. We've got a couple of workshops coming up, one in November and one in December. I'm one of the facilitators, along with Eva Sashar, for the one in November. And then Eva Sashar and Dr. Ebony Dotson are gonna be facilitating the one in December. They both should be really phenomenal workshops. In addition to the 10% code, we certainly encourage you to use your professional development budget if you have that available to you. But LA Tech for Good does have some reduced-rate fees, and there's a fairly simple form you can fill out to request a code for a reduced rate. So if you're a really small nonprofit or grassroots organization, or you're a student, that is something that's available.

And lastly, I just wanna thank you for listening to me, and I hope you got something out of it. I would love feedback to know what you got out of it or what you might be interested in hearing more about. Feel free to email and contact me to talk about data ethics and equity, or wildlife rehabilitation. So excited that I was able to be here today. And I'm looking forward to questions.

Okay, Kai, this was so excellent. The title was spot on, internalizing data equity, because I think that's what people are doing here. They're internalizing. There were comments in the chat, and not many questions, mostly comments. So I do wanna read some of the comments. Alisa said: I personally heard so many confidential situations of financial elder abuse due to misuse of information. Very embarrassing and financially devastating for the victim. So sad; technology is necessary, but controls are inadequate. That was a really good comment. And there were some other comments in here. I love one that was put in the chat, excuse me, in the Q&A: I hope that one positive lesson from COVID was learning that so many deadlines are artificial. Who knew life would go on when the IRS extended the tax reporting deadline? That was from Alisa as well. That was really, really good. Thank you for putting that in.

So I really think that's what people are doing, they're internalizing the data. Several things that I got: you all wrote some of your takeaways in the chat, like that data is power. And I did see some comments where people were saying what they thought data was. Lots of thank-yous. From Kathleen: great information and lots of inspiration for the next step. Very good. Thank you so much.

One thing you said: what happens to the data after we get it in our hands? That was powerful and made me think. So can you give me some more thoughts around that? When you said that, you're right, we have to think about that.

Yeah, I think some of the things it's made me think about go back to the model that we're using with GDPR. GDPR has things like the right to be forgotten, but also: how long should you hold some of this data before you just delete it?
And so, what happens to it after it's out of your hands? It makes me think about how long we should be keeping this data. Where should we be storing it? Can we aggregate this data in the important ways and then remove all of the personally identifiable information? Then we don't have to store that somewhere in case we wanna look at it again. We don't need that. So those are some of the things it's made me think about. In the US, at least in the culture I come from, there's this sense of, oh, I wanna keep everything. What if you need to use this? What if this information is important in the future? What if somebody asks for it? And really doing this work has made me realize that I think we'll be okay. It's like cleaning stuff in your room: do we really need to keep that? No, probably not. So that's one thing that it's helped me with. As well as just: should I be asking for this information in the first place?

That stood out to me too. Lots of virtual hand claps in the chat room. So thank you so much, and thank you all for attending the Future of Work conference.
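As a closing illustration of the "aggregate, then delete" idea Kai describes, here is a minimal pandas sketch. All of the column names and the `aggregate_and_forget` helper are invented for illustration; this is one possible shape of the idea, not a prescribed implementation.

```python
# Minimal sketch of "aggregate, then delete": keep the de-identified
# summary you need for reporting, and drop the personally identifiable
# columns rather than archiving them "just in case". Column names are
# invented for illustration.
import pandas as pd

PII_COLUMNS = ["name", "email", "street_address"]

def aggregate_and_forget(responses: pd.DataFrame) -> pd.DataFrame:
    """Return respondent counts per region, discarding all PII columns."""
    # Compute the aggregate view we actually need for reporting.
    summary = (
        responses.groupby("region", as_index=False)
        .size()
        .rename(columns={"size": "respondents"})
    )
    # De-identify the working copy in place; only this de-identified
    # version (and the summary) should ever be retained or backed up.
    responses.drop(columns=PII_COLUMNS, errors="ignore", inplace=True)
    return summary
```

The deliberate choice here is that the raw, identifiable rows never get a long-term home: the reporting question is answered first, and the identifying fields are removed as part of the same step.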