Hi, everyone. Thanks for joining me live today for a quick session, a little Q&A, and a highlight of some of the topics we'll be covering. For those of you just joining, my name is Kelly Combs. I'm a director at KPMG out of our Chicago office, and I work closely with IBM in a couple of areas: one around responsible, ethical AI, which at KPMG we call AI in Control, and also pairing with IBM on the tooling side, looking at how specific tools such as IBM OpenScale can actually help monitor, assess, and manage AI solutions over time.

So here's what we'll be covering today. I'll give a quick background about myself and some of my thoughts in the AI space. I have a couple of questions in the chat box already, and feel free to enter questions along the way. And lastly, I want to remind folks that our longer session, where I'll be speaking with IBM Research, is May 19th. It's called What's Next in AI, at 10:30 a.m. Central Time, 11:30 a.m. Eastern Time, and I'll be speaking with Kush Varshney from IBM Research on this topic in much more depth. It will also be a live session, so if you like this, please join us next week.

So, a couple of things about myself, and a couple of questions I figured I might answer in this session. First: what is responsible AI? There are a lot of buzzwords out there: responsible, ethical, trustworthy. At the end of the day, what a lot of organizations are trying to get their hands around, and what IBM Research is studying in terms of techniques that can mitigate some of these business challenges, is: how do we unpack the black box? How do we establish principles an organization can adopt that serve as guidelines for what good looks like in AI, that define the specific activities and stakeholders involved, and that ultimately provide transparency and explainability while mitigating unintended consequences, like bias, unfair outcomes, or outcomes drifting over time into skewed results that are outside the comfort zone of the organization?

To me, responsible AI encompasses a number of different principles. Resiliency: the security and cyber implications of AI. Integrity: understanding the data used to train AI, and how feedback data allows the AI to keep evolving. Fairness: social injustice, data imbalances, bias, and how decisions are made and change over time. And lastly, explainability. Then the question is how we apply those principles at the enterprise level to understand where we're comfortable using AI and data and where we're not, how we define the supporting activities at the different stakeholder levels, and, since AI is in some instances self-evolving, what technology we can use to monitor and manage it on an ongoing basis. In other words, how do we begin to digitize those principles and the governance and operating model?

I'm going to look at a couple more questions coming through in the chat, and I'm happy to keep expanding on some of these topics. Another one I get a lot: how do we think about responsible AI in light of social justice? To me, there's actually a bit of a difference between responsible AI and ethical AI.
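Before I unpack that distinction, a quick aside on the earlier point about outcomes drifting over time into skewed results. One common way to put a number on drift is the population stability index (PSI), which compares the distribution of a model's outputs in production against a baseline window. What follows is just a minimal sketch, not part of any KPMG or IBM tooling; the ten-bucket setup, the simulated scores, and the 0.2 alert level are widely used rules of thumb, assumed here purely for illustration.

import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty buckets so the log term stays defined.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical model scores: at deployment vs. this month.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)
current_scores = rng.beta(3, 4, size=5000)

print(f"PSI = {psi(baseline_scores, current_scores):.3f}")
# A PSI above roughly 0.2 is often read as meaningful drift,
# i.e. results moving outside the organization's comfort zone.

The point isn't this specific metric; it's that drift stops being a vague worry once there's a number an organization can put a bound on.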
So, ethical AI to me is about taking human-centric principles, concepts around what's right and what's wrong, and applying them at the technology layer. Ethics applies to us as humans in how we make decisions and how we act, and you can see it manifest in organizations as a code of conduct, for example. How that translates to technology that mimics human behavior is slightly nuanced and needs to be defined. So a lot of organizations are starting to think through: how do I define these broad ethical guiding principles or initiatives? How do I then define the key stakeholders and the activities they can do that translate to the technology layer? And how do I ultimately measure it? If I have an ethical principle around treating individuals equitably, making sure we're looking at social justice and treating everyone equally, that can manifest in AI in the form of bias and fairness: can we actually have a metric that measures imbalances in the data, or whether decisions are being skewed toward one segment of the data set versus another? I'll show what measuring that could look like in a minute. So that's just an example of how ethics translates a little differently than the broader concept of responsible AI.

A couple more questions that are out there. What's different about clients' stance on responsible AI in 2021 and 2022, and are there major shifts we're beginning to see? I think COVID, for a lot of our clients, has accelerated the adoption of AI in certain industries. Organizations that weren't as digitally native or savvy are now diving in headfirst and accelerating their technology agendas, so we are seeing an uptick in AI adoption. We've done some studies in the past on which industries are leading the way: clearly financial services and the technology sector are making big bets in AI, building AI capabilities, with a couple hundred data scientists within their organizations, and that's been happening since 2019. But we've also seen clients in industries like life sciences and healthcare, with COVID, starting to pave the way and build a lot of capabilities. So it's no surprise that where AI wasn't on the agenda, or maybe wasn't a priority, it's now helping provide solutions and insights in areas we couldn't reach before, and organizations want to make a very specific and pointed effort to invest.

Hand in hand with that goes the question of how the regulatory environment is changing and shaping what good needs to look like, and what governance should look like, specific to AI. For a number of years there have been a lot of broad, sweeping guidelines and initiatives focused on a couple of things. One is competitiveness: how do we compete with China, how do we upskill individuals in a country, how do we get funding and build centers of excellence, labs, and research as part of a consortium between the public and private sectors? The other area is really focused on the ethical, responsible question in terms of regulation: how do we start to think about how consumer data is used, and what transparency is required in decision engines when they're making decisions that could impact consumers?
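Coming back, as promised, to what a fairness metric could look like: the sketch below computes a disparate impact ratio, the rate of favorable outcomes for one segment of a data set divided by the rate for another. The column names, the toy data, and the 0.8 flag level (the so-called four-fifths rule of thumb) are illustrative assumptions on my part, not a regulatory standard or any specific product's API.

import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, segment: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates between two segments.

    Near 1.0 means the two segments are treated similarly; well below
    1.0 suggests decisions are skewed toward the privileged segment.
    """
    rate_priv = df.loc[df[segment] == privileged, outcome].mean()
    rate_unpriv = df.loc[df[segment] == unprivileged, outcome].mean()
    return rate_unpriv / rate_priv

# Hypothetical decision log: 1 = favorable outcome.
decisions = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
    "segment":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

ratio = disparate_impact(decisions, "approved", "segment", "A", "B")
print(f"disparate impact: {ratio:.2f}")  # flag for review if below ~0.8

Once a principle is expressed this way, it becomes something a team can test, trend, and set bounds on, rather than a statement in a policy document.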
On the regulatory front, we're starting to see a little bit of guidance on what broad principles could look like and what good governance activities could be. Nothing overly prescriptive yet, other than the recently issued EU draft legislation, which could have some major impacts when, or if, it's eventually fully passed: looking at high-risk algorithms, requiring risk ranking, restricting certain uses of data, and basically following GDPR's lead in tightening up some of the use cases for algorithms within the EU. That will be really interesting to track and see how it evolves. The FTC has also commented and strengthened its position on AI, especially around consumer and social causes. So I think there's more to come on the regulatory front, but prior to the past year it's really been more wait-and-see: broad guidelines, nothing very prescriptive, plus the focus on research, innovation, and standing up capabilities within countries.

I'm going to pause and see if any more comments are coming through; otherwise I can continue on a couple of other areas. Let's see here. Maybe this one: what advice do I have for organizations working on explainable and responsible AI? As I mentioned before, there are two areas I think have been paving the way in the past year. One is the ethical question: how do we make things more equitable, and for these human-centric ethical principles, how do we measure and understand how they manifest in AI and technology solutions? That's one trend on the responsible AI front. The other is the concept of digitizing governance. We shouldn't be relying on manual methods or policies; we should be codifying and writing specific tests and validations along the AI lifecycle, the concept of MLOps or AIOps. There are a number of ways and tools, for example in IBM's platform, where you can start to configure and measure tests around fairness, around data imbalance, around which data attributes contributed to the outcome of a decision. That trend is continuing to pick up: can I configure the workbench for the developers or data scientists building these solutions so that the governance checks, the controls we would expect to be in place, are configured as part of the AI lifecycle? I'll sketch what one of those codified checks could look like in a second. And on the monitoring side, IBM OpenScale is one of the tools out there that provides visibility past development, into the production environment: can I actually see how decisions are being made, and can I surface that information up to both the data scientist and the business user, the end user who might be responsible for ingesting the output and making decisions based on that information?

So we've been working in this space, and with IBM, for a number of years. There are blog posts and other material out there on our responsible AI framework, as well as IBM's, which is very similar. And we got started in this topic area really in response to AI adoption increasing.
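Here's that rough sketch of a codified governance check. The idea of digitizing governance is that the policy lives as configuration, and an automated gate runs it along the AI lifecycle, for example before a model is promoted to production. The metric names and bounds below are hypothetical, and this is plain Python rather than OpenScale's actual API; it's one way the idea could look, under those assumptions.

from dataclasses import dataclass

@dataclass
class GovernanceCheck:
    metric: str
    lower: float  # minimum acceptable value
    upper: float  # maximum acceptable value

# Digitized policy: the bounds the organization is comfortable with.
POLICY = [
    GovernanceCheck("disparate_impact", lower=0.80, upper=1.25),
    GovernanceCheck("accuracy", lower=0.85, upper=1.00),
]

def run_gate(measured: dict, policy: list) -> bool:
    """Return True only if every configured check passes."""
    passed = True
    for check in policy:
        value = measured[check.metric]
        if not (check.lower <= value <= check.upper):
            print(f"FAIL {check.metric}: {value:.2f} outside "
                  f"[{check.lower}, {check.upper}]")
            passed = False
    return passed

# Metrics computed during a validation run of the model.
metrics = {"disparate_impact": 0.72, "accuracy": 0.91}
if not run_gate(metrics, POLICY):
    raise SystemExit("governance gate failed: do not deploy")

The design point is that the check is configuration, not a policy document: change a bound, and every pipeline run enforces the new comfort zone automatically.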
As I mentioned before, we were also starting to see some traction in the regulatory environment and wanted to be proactive: as these capabilities pick up speed, how do we make sure we think through the implications of the technology? I've been working on this for a number of years, and IBM Research is doing a number of really cool things, which Kush will be able to speak to in his quick stream ahead of our talk next week. It's really interesting to see how responsible and trusted AI has paved the way, and now we're double-clicking into ethical AI, and into AI and ML ops. We're seeing a lot of trends in use cases and adoption across industries and organizations, and we're helping organizations think through the end result and the implications of all of that. I'll see if there are a couple more questions; if not, like I said, this is a teaser ahead of our talk next week, which will have more opportunity for Q&A, and where you can also ask what IBM Research is up to these days.

I often get asked what the biggest risk is that we see with AI, and I think that's a bit of a tricky question. There are a number of risks out there, whether it's security and the things that can happen with deepfakes and adversarial attacks, although those are probably not what keep me most worried. I do think the human element, how we best keep humans in the loop and augment our existing skills and the way we make decisions today with AI, is going to be an adjustment and a challenge, if not a risk, for those thinking about how to work with AI and what it means for their job responsibilities. That question has been around since robotic process automation started being implemented, and AI, as a more complex decision engine, continues to challenge what new working roles look like within organizations and what this concept of a digital worker augmenting our work really means. So the risk around fully harnessing the capabilities, upskilling individuals, and understanding how we can work with the technology is an area I think will be interesting.

And then obviously there are risks around the impact AI can have when it makes wrong decisions that compound, or makes decisions individuals aren't comfortable with, or starts hitting thresholds we've set around fairness or other bounds we're comfortable with: how do we think about that risk and address it head-on before the problem compounds? I'll sketch one simple way to catch that early in a moment. Those are some of the areas we typically talk about.

There's also a comment here asking about the upside of addressing responsible AI, with a couple of good examples. The upside is really the accelerated insight: the ability to leverage and harness the power of data to get new outcomes, or to predict things we may not have been able to predict before. One of my clients is actually working to think through how to predict the next big macro-level risk that could impact the organization. Can we predict the next COVID, whatever that might be? The power of combining external data with their own data, and using some different modeling techniques, is helping them move the needle.
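And here's the simple early-warning sketch I promised on thresholds: a rolling window over production decisions that raises an alert as soon as the favorable-outcome rate for any segment falls outside a configured comfort zone, so the issue surfaces before it compounds. The window size, segments, bounds, and simulated decision feed are all illustrative assumptions, not any product's behavior.

import random
from collections import deque

WINDOW = 500            # most recent decisions to evaluate
MIN_SAMPLES = 50        # per-segment sample size before we judge a rate
BOUNDS = (0.35, 0.65)   # acceptable favorable-outcome rate per segment

window = deque(maxlen=WINDOW)  # holds (segment, favorable) pairs

def record_decision(segment: str, favorable: int) -> list:
    """Log one decision; return alerts for any out-of-bounds segment."""
    window.append((segment, favorable))
    alerts = []
    for seg in {s for s, _ in window}:
        outcomes = [f for s, f in window if s == seg]
        if len(outcomes) < MIN_SAMPLES:
            continue
        rate = sum(outcomes) / len(outcomes)
        if not (BOUNDS[0] <= rate <= BOUNDS[1]):
            alerts.append(f"segment {seg}: rate {rate:.2f} outside {BOUNDS}")
    return alerts

# Simulated decision engine that quietly favors segment A over B.
random.seed(1)
for _ in range(1000):
    seg = random.choice(["A", "B"])
    fav = int(random.random() < (0.60 if seg == "A" else 0.30))
    alerts = record_decision(seg, fav)
    if alerts:
        print(alerts[0])  # surface to the data scientist / business user
        break

In production you'd surface this through a monitoring tool rather than a print statement, but the principle is the same: the comfort zone is explicit and checked continuously.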
Back to that client: the goal is to move from detective, reactive risk management and mitigation to better forecasting of where to deploy resources to prevent something from happening. I think that could be super powerful, and the benefits will be extremely helpful.

There have also been a number of AI use cases for social good. One of the entities we worked with was the city of Amsterdam. When people file complaints to the city about trash removal, or about keeping parks clean, the city uses algorithms to decide where resources should go. Without responsible AI, resources were being deployed based on where the most complaints were filed, which tended to be from people who had the access to file complaints, in the more affluent areas. Now that the city has been able to use some external data, put governance around how the algorithms work, and look more precisely at how the decisioning is happening, they're able to deploy resources more fairly across the whole city, not just to the areas where the more affluent file complaints. So there are tons of examples where good governance can move the needle toward fair outcomes and fair distribution in decisions made by an AI decision engine.

So I think I've given a fair amount of a teaser. Thank you all for joining us today. Like I said, I'm really excited to speak with IBM Research next week, and I'll plug it one more time for everyone: May 19th, What's Next in AI, from explainable AI to responsible AI, with Kush Varshney and myself, at 10:30 a.m. Central Time, 11:30 a.m. Eastern Time. You should be able to register for the web series in the chat; yes, the link is now in the chat, and we'll be streaming on YouTube. So thanks, everyone. I hope you enjoyed this quick teaser, and we'll make sure we bake some of the questions and thoughts folks shared into our longer session. Have a great day. Thank you.