So, on behalf of my group, I'm extremely excited to be presenting our product, which is known as AI Blindspot. I actually wanted to start by reflecting a little bit, looking back on March. We started this program in the middle of March, and within the initial two weeks there was an amazing flurry of activity where major tech companies were making all these bold declarations about what they were doing to address AI ethics. In the span of two days, you had Microsoft announcing they were going to add an ethics checklist to their product releases, you had Google announcing their now-infamous ethics panel, and you had Amazon announcing a partnership with the National Science Foundation to study AI fairness.

But as I was watching these headlines come out, you got the sense that this wasn't what was really going on, because right around the same time I was starting to have conversations with people at several tech companies who are involved in ethics initiatives. Even the ones who were genuinely trying their best to implement changes and to study bias in AI systems, frankly, a lot of them just didn't know what they were doing, as they openly admitted, because there were simply no structures or processes for assessing bias in AI systems. A lot of tools have come out in the past year, including those that came out of Assembly, like the Data Nutrition Label, and others like Model Cards for Model Reporting from Google. But you need both tools and structures and processes, and those structures and processes are what we decided to address with AI Blindspot.

AI Blindspot is a discovery process for spotting unconscious biases and structural inequalities in AI systems. When I say blind spot, I'm referring to oversights that can happen in a team's natural day-to-day operations during the course of planning, building, and deploying AI systems. There are a total of nine blind spots, which you can see in the diagram on the left. It all starts with purpose, right in the middle, because everything in an AI system should always come back to its purpose and what it's being designed for. Then, starting at the lower left and going clockwise, the other blind spots are representative data, abusability, privacy, discrimination by proxy, explainability, optimization criteria, generalization error, and right to contest. Again, these are all cases where oversights can lead to bias in AI systems that, in most cases, will harm vulnerable populations through unintended consequences.
But AI Blindspot isn't just a fancy diagram with lots of nice colors. We wanted to turn it into an actual tool that teams could use, so we created these blind spot cards. We created these because we wanted to design something a bit more accessible. There are also a lot of impact assessment tools coming out, and I can say from personal experience that they're very cold and technical. We wanted to create something lighter and more accessible that teams would be a little less intimidated by. By the way, this photograph is courtesy of our professional photographer, Jeff, and our professional hand model.

I'll walk you through the layout of the cards. The left represents the front side of the card and the right represents the back side. The front starts with a description of what the blind spot is, phrased as best we can in non-technical language so we can reach different audiences. On the back there's a "Have you considered" section that lays out some of the steps you can take to address the blind spot. In the case of explainability, examples include surveying users on whether they actually trust the recommendations made by your AI system; considering different types of models that may be more explainable than others; factoring in the stakes of the decision (are you just recommending a movie to somebody, or are you deciding whether somebody gets a home loan?); and potentially modeling counterfactual scenarios that enable people to see what would have to change in order to achieve a more desired outcome (there's a small sketch of that idea a little further down). Then we provide a case study to give a real-world example of where this blind spot arises and where, in many cases, it has harmed vulnerable populations due to oversights a company has made. There's a "Have you engaged with" section that highlights specific people or organizations you may want to consult because of their expertise, either within your own company or organization or outside of it. And there's a "Take a look" section with a QR code that takes you to different resources to help you address the blind spot.

This shows our website. It's amazing, actually, that the video I recorded this morning is already out of date because Jeff keeps making so many changes to the site, but it shows the cards, and it enables users to explore the different blind spot cards. If you click on one, like explainability here, it shows you the same content as the card along with the actual resources behind that QR code, so you can follow the links to different places to learn more about the blind spot and how to address it. And then there's a "What is missing?" button where you can provide suggestions. It's a good thing we added this, because we've already gotten our first feedback, from somebody at the University of Washington who, fortunately, I think mostly had good things to say.
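Since the counterfactual idea on the explainability card can sound abstract, here is a minimal sketch of what it might look like in practice, under toy assumptions: a synthetic loan-approval model and a greedy search that nudges an applicant's profile until the decision flips. None of the names, numbers, or the search strategy come from the actual cards; they are invented for illustration.

```python
# A toy counterfactual explanation, just to illustrate the "Have you considered"
# suggestion above. The loan-approval model, feature names, and search step are
# all invented; this is not code from the AI Blindspot toolkit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [annual income in $k, debt-to-income ratio]
X = np.column_stack([rng.normal(60, 15, 500), rng.uniform(0.05, 0.6, 500)])
# Toy "ground truth": approve when income is high and the debt load is low.
y = ((X[:, 0] > 55) & (X[:, 1] < 0.35)).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(applicant, step=(1.0, -0.01), max_iters=200):
    """Nudge income up and debt ratio down until the model approves;
    return the changed profile, or None if no flip is found."""
    candidate = np.asarray(applicant, dtype=float).copy()
    for _ in range(max_iters):
        if model.predict(candidate.reshape(1, -1))[0] == 1:
            return candidate
        candidate = candidate + np.asarray(step)
    return None

rejected = np.array([48.0, 0.45])           # an applicant the model turns down
flipped = counterfactual(rejected)
if flipped is not None:
    print("rejected profile:    ", rejected)
    print("would be approved at:", np.round(flipped, 2))
```

A real system would restrict the search to changes the applicant can actually act on and would hold protected attributes fixed; the point here is only that the output is something a person can read as "here is what would have to change."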
With that, we wanted to give an example of a case study showing how this could be applied in the real world. This is a semi-fictional case study; it may or may not have been informed by an actual incident at a major tech company that I may have mentioned earlier in the presentation. Hypothetically, let's say there's a tech company that has a lot of internal data on its historical hiring practices, and it wants to use AI to identify candidates for software engineering jobs. So they go to their data science team and say, okay, we want you to build a model that will help us screen resumes so we can fill these software engineering jobs. The data science team does that: they build a model and deploy it. But then they realize that only white men are being recommended. So what happened there, and more specifically, how could AI Blindspot have prevented it? I'm going to give an example of one card from each of the three stages: planning, building, and deploying.

Again, it all starts with purpose, and really asking yourself, what are we trying to accomplish here? This would involve talking to the team about why they want to use AI. Are you just trying to get through resumes faster, are you trying to identify better candidates, or are you trying to increase diversity? And then really asking yourself whether AI is suited to achieving all three of those goals. If you just want to get through resumes as fast as possible, AI may be able to help you with that. But if you want to identify better candidates, you would have to question your historical hiring practices, and if you want to increase diversity, AI may not be the right tool at all. So we encourage teams to really question whether AI is even suited to their purpose. In this case, let's say the team says, okay, it's number two: we really want the best candidates, and we really think AI can find them.

So we move on to the building stage, and the issue of discrimination by proxy. That refers to situations where you may not include features like race or gender or other protected classes in your model, but you may have other features that are so highly correlated with race or gender, such as attendance at historically Black colleges or all-women's colleges, or sports like lacrosse that white men are more drawn to, that they ultimately lead to discrimination. We would encourage the team to consult with social scientists or human rights advocates who are more knowledgeable about historical biases and can help identify features that may be problematic and could lead to discrimination.

Let's say the team has done that, and now we move on to the deploying stage. Here I'm even going to give the company the benefit of the doubt and say that they actually want to increase diversity, and they realize that AI can't do that on its own. They realize they have to go back and fix their recruiting pipeline first by building more diverse candidate pools, and then they may think, okay, now AI can help us increase diversity. But that's not actually the case, because it brings up the issue of generalization error: if you have a history of not recruiting diverse candidates and now you do recruit them, a model that was built on historical data is not going to be set up to evaluate new candidates with different backgrounds. So you would have to consider something like an anomaly detector that lets you identify circumstances, such as candidates with more unique backgrounds, where AI is just not suited and you do need a human to review. These are just suggestions, but they give you some idea of how teams could work through the planning, building, and deploying of AI systems to identify their blind spots and brainstorm how to address them.
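To make a couple of these ideas more concrete, here is a minimal sketch under purely synthetic assumptions: a quick correlation check that can surface a proxy feature (it doesn't replace consulting social scientists, it just flags features worth that conversation), and an isolation-forest anomaly detector that routes unusual applicants to human review instead of the automated screen. The feature names, data, and thresholds are all hypothetical.

```python
# Sketch of two checks from this case study: a quick proxy-correlation audit and an
# anomaly detector that routes unusual candidates to a human. All of the data and
# feature names are synthetic and invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
n = 1000

# Discrimination by proxy: a feature you kept can stand in for one you removed.
protected = rng.integers(0, 2, n)                      # e.g., gender, not given to the model
played_lacrosse = (rng.random(n) < 0.05 + 0.5 * protected).astype(int)
corr = np.corrcoef(protected, played_lacrosse)[0, 1]
print(f"correlation between dropped attribute and kept feature: {corr:.2f}")

# Generalization error: flag candidates who look unlike the historical data.
historical = np.column_stack([rng.normal(6, 2, n),     # years of experience
                              rng.poisson(3, n)])      # count of "traditional" credentials
detector = IsolationForest(contamination=0.05, random_state=0).fit(historical)

new_applicants = np.array([
    [5.0, 3.0],     # resembles the historical pool
    [15.0, 0.0],    # career changer the model has rarely, if ever, seen
])
# predict() returns -1 for points that look unlike the training data.
for applicant, flag in zip(new_applicants, detector.predict(new_applicants)):
    route = "human review" if flag == -1 else "automated screen"
    print(applicant, "->", route)
```

The contamination value just sets how aggressively unusual profiles get routed to a person; a team would tune that against how much review capacity it actually has.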
So, with that said, what's next for us as a team? We have a lot of ideas for potential use cases, some of which we got from peers in our cohort, and I can see a lot of potential uses in different settings. It could be a product manager leading a design sprint who uses the blind spot cards to help work through the design thinking process. It could be a new director of data science at a startup where there aren't really structures or processes for how data scientists go about their jobs, and the blind spot cards could help guide their work. Or it could be a city task force that's responsible for auditing AI systems but has a less technical background and similarly needs some guidance on which blind spots to look for. And there could be other potential uses as well.

Our plan for next steps is to engage with users, doing user studies to figure out which audiences are the best fit, and then hopefully getting testimonials from organizations where the blind spot cards have been helpful in assessing bias in their AI systems. In the grand scheme, we hope this could become part of a certification process through an organization like IEEE, where, say, if an organization has processes like AI Blindspot combined with tools like the Data Nutrition Label, it could certify itself as using AI responsibly. Those are our long-term goals; they won't happen anytime soon, but we see a lot of potential for what this could do.

We wanted to close with the Joker card, which is one of the blind spot cards. All of you should pick up a set of blind spot cards, by the way; we have them on our table outside. The Joker card represents the idea of the unknown unknowns: we've identified these nine blind spots, but there are other blind spots, too, that we probably didn't think of. We've identified potential use cases, but there may be others we haven't thought of yet that some of you in the audience will. So definitely come talk to us if you have ideas for where and how this could be applied, because we really see the potential to help the organizations I was talking about at the beginning, the ones that really want to evaluate their systems for bias as best they can and just don't know how to do it. With that, thank you very much.