Hello everyone, I'm Brian Dumas, and we're back with another episode of Three Questions, where we chat with colleagues within IBM Research to learn a bit more about their work and the broader implications that work will have on society and business. I'm happy to be joined by Dr. Stacy Hobson, a researcher at IBM who also serves as Director of Responsible and Inclusive Technologies Research. AI is clearly having a moment right now and is a hot topic of conversation, as new innovations in the space seem to come online every single day. While the pace at which new technologies are being introduced is exciting, it's also an important reminder that we need to usher in this wave of innovation responsibly and with every person in mind. So with that said, I couldn't be more excited to have Stacy join us today to talk about her efforts to do just that: to make sure that AI is being created and released in ways that represent and support every person equally. Stacy, thank you so much for joining us today.

Thank you so much for the invitation, Brian.

So let's get right into it. Question number one: tell us about your work at IBM Research.

So I lead a team of researchers, here in New York, in Brazil, and also in Switzerland, focused on responsible and inclusive technologies. And what that means is we really want to make sure that technology, whether it's AI, neurotechnology, quantum computing, or any type of advanced or emerging technology, is developed, deployed, and used responsibly. When we say responsibly, we really want to ensure that the impacts it has on society and on people are positive. So we want people to be very intentional and aware, when they're creating or deploying technologies, of some of the risks of potential harm to individuals, communities, the environment, and so on.
And when we say inclusive, we want to make sure that we're really thinking broadly about people of various backgrounds, social statuses, ethnicities, gender identities, and so on, so that technology works for everyone, not just the few.

I love that. And so what are some real-world examples of irresponsible AI, AI that didn't take those considerations into account and had negative impacts on the folks who actually engaged with the technology?

So one of the more recent examples came during the COVID-19 pandemic. A lot of schools were closed and students were learning at home through remote learning. And there was an example in the United Kingdom where an AI system was used to help generate the end-of-year grades for high schoolers. This became problematic because after the grades were created, we learned that the system was penalizing poorer students: students from more affluent backgrounds had higher end-of-year scores as reported by the AI system, while students from poorer backgrounds had lower scores. I'm sure it wasn't intentional. The person who created the technology didn't go out and say, hey, I want to inflate the scores for certain people and reduce the scores for others. It was probably inadvertent. But this is why the work that we do is so important, so we can bring these things to light very early on, at the beginning stages of when a technology is being designed or developed, and well before it's actually deployed in a real-world setting.

How does your team work to make sure that that doesn't happen? How do you all assess risk? How do you work with teams and developers to make sure that they're thinking about these things and bringing products to market that don't impact people negatively in the ways that you just explained?

Yeah, so it's really not an easy task.
That's why our team has been working on it, and that's why lots of other teams are working on it as well, or at least thinking about it. Our team has created a framework that helps people understand the impacts of technologies, so that they can anticipate how a technology might be used and the types of impacts it might have. We also have a stakeholder impact mapping tool that's now being used with clients. So when IBM colleagues run design thinking sessions, or our IBM design colleagues work with clients, they're using our tools, and that's amazing, because we started off as a small team doing work here in IBM Research and now our work is seeing broader impact and broader usage. That excites me; it makes me happy that I can see we're having an impact. And our work is not just for IBM Research or IBM; we're really thinking about how others can use it as well. A critical aspect for us is inclusion: getting perspectives, involvement, and input from regular people, people who are not technologists or researchers, because it's critical for us to understand the scope of technologies and perspectives beyond our own. So we have input from people from many different backgrounds, so that we understand what the needs are, what the expectations are, and how technology will be used in certain contexts.

And so let's say that you and your team are working with another team of developers and you identify a weak point, a flaw, a risk, a bias. How do you correct that? Is it as simple as changing the code? Is it changing the UI or the UX? What are the steps involved, once a potential problem point has been identified, to course correct?

Yeah, so there are multiple ways that we can address it. It depends partially on what the issue is.
The issue could be in the training data set, in how the model is developed, or in how it's being used. We look at the different origins of the issues to determine what should happen next. Some of the options could be to redesign the code, redevelop it, or release a new version. Another could be applying a mitigation, whether that's a technical solution or a non-technical one. So the recommendations that we give really depend on the context and the type of technology being used. But the reality for our team is that we can't get involved in every situation. So we want to equip our colleagues and other technologists and researchers with the knowledge and expertise to start doing this on their own, so that as we create technology, we're being intentional and thinking about this up front, and we don't continue putting out technologies that could have differential impacts on people and on society.

I get it. And so to that point about equipping our colleagues with those tools: you've written a lot of papers, and you've published frameworks to establish how people should think about these things as they bring technologies to market. What are the contributions that you're most proud of? What are the contributions with the furthest reach? Tell us about some of the ways that the things your team has developed are being used in the field right now.

Yeah. So we're very happy that we have two upcoming publications at a human-computer interaction conference in 2023 that'll talk a little bit more about our team's research and some of the tools we're creating. We're also planning to release some of the tools as open source. And one of the tools that I'm really excited by, and that's kind of fun, is our card tool. It was created as a game, like a card game that a team can use.
So they can play the card game and learn about how their technologies might impact people in a real-world setting. There are questions that pop up, there are suits, there are game modes and so on, and it's a really fun activity for teams to do. But beyond being fun, it actually works to help people expand their thinking and change how they design and develop technology. So you have the fun aspect, but you have the real-world benefit and positive impact as well, and that's really cool. We also have frameworks and tools that people can use on a day-to-day basis.

That's amazing. So you've gamified what can be a touchy subject, to encourage this cultural and philosophical change so that we have technologies that are more inclusive and more responsible.

Exactly.

I love it. So I guess, what's next? You're off to a great start this year, very excited and looking forward to the publications at the conference. What else is going on with your team? What can we expect from you all for the rest of 2023 and into the future?

So going forward, we're planning a responsible and inclusive technology consortium, or working group, later this year, where we're bringing together technologists and academics alongside civil society organizations, community members, and grassroots organizations, so we can work together to create this vision of technology for the future and really make it happen. I think it's critical to have this input and agreement from regular society, regular community members saying, yes, this technology works well for me, and it works just as well for me as it does for you or for someone else. That's really critical. So how do we achieve that? We don't know. We're still trying to figure that out, and we're still working toward it. But we do think it's critical to have the input, perspectives, and agreement of regular society.
Yeah, it's definitely not going to happen in a silo. And the more people we can bring to the table earlier in the process, hopefully the more inclusive the technologies we create will be.

And in a non-extractive way. That's a critical component of our research as well. Even though we want input and participation, we want it in a collaborative manner, so that it's not extractive. That's really critical to us.

Amazing. Stacy, this has been a great conversation. Thank you so much for sitting down to chat with us, and congratulations on all the exciting work that you and your team are up to. I'm looking forward to the publications coming out soon, and to seeing the other great things that you and the rest of the team do for the rest of this year and in the coming months. So thank you for sitting down with us.

Great. Thank you so much for the conversation. It was an honor.

And thank you all at home for joining us as well. Remember, this is Three Questions, where we sit down with researchers from IBM to learn more about their work and the impact that work will have on society and business. If you want to learn more about Stacy's work on responsible technologies, then please feel free to engage with the links in the description, and stay tuned to our channel for more engaging content. Thanks everyone. Take care.

Thank you.