Hello everyone, my name is Kush Varshney and I'm a distinguished research staff member at the IBM T.J. Watson Research Center, which is in Yorktown Heights, New York, north of New York City. My apologies, we're doing this live and there was a bit of an echo. So it's really great to be here. One of the things that I do at IBM Research is lead our research on trustworthy AI.

And what do we mean by trustworthy AI? The first question to ask is what it means for another person to be trustworthy. It's only when we can answer that that we can start thinking about what it means for machines and AI systems to be trustworthy. The first thing that I want from someone else, for them to be trustworthy, is that they're competent at what they're doing, that they do what they say. The second is that they remain competent in different conditions and settings, that they're reliable. The third is that I can go back and forth with them, that I can communicate with them. And the fourth is that they have goals beyond just themselves, that they're selfless in some capacity.

Those are exactly the same things that we want from trustworthy AI systems. We want them to be accurate. We want them to be reliable, robust, and fair. We want them to be explainable and transparent, and designed in ways that let us instruct them on our values. And we want them to be empowering, so that people from all sorts of groups and situations are able to use AI to meet their goals.

Just to tell you a little bit about myself and how I got into this field: I actually have a lot of privileges, both from growing up and from working for a large company like IBM. When you're in a privileged position, it's very easy to slip into the mode where the only things you care about are the things that other powerful or privileged people care about. But that's a lack of selflessness, and so in itself a kind of lack of trustworthiness, if you only think about your own goals. So when I think about trustworthy AI, I think of it as a way to empower everyone, to uplift those who are most vulnerable, and to bring forward the viewpoints and goals of those who are marginalized. I think it's very important when we're working on technology to bring in that perspective, because we can be in our own filter bubbles and see things only through our own lenses, and that's not what we should be doing.

If you're a business, a company, there are many reasons why you might care about trustworthy AI. Some of those are externally directed: there's an increase in regulation happening around AI, and there's the general sense that brand reputation can be hurt. But I think companies should actually be self-motivated. They should be thinking about the fact that AI is really changing the fabric of society, and everything we do is going to come back to us at some point, whether directly, indirectly, or in a future generation of humanity.
And right now we are at a point where this completely new general-purpose technology is finding its place, and we have to develop it in ways that make sense, that are uplifting, and that don't exacerbate inequalities. So that's my view on things.

You heard from Kelly Combs of KPMG yesterday, and the two of us are going to be doing a longer webinar in which we go through a lot of the technical details and business motivations of trustworthy AI, AI governance, and so forth. That's next Wednesday, May 19th, at noon Eastern time where I am, but it will be available all over the world: six o'clock for my friends in Paris, seven in the evening for my friends in Nairobi, 9:30 in New Delhi, and midnight going into Thursday morning in Manila. I hope you all join us for that. And if there are any questions people want to post in the YouTube chat, I'm happy to talk more about anything on trustworthy AI.

I don't see any questions coming in yet, so let me say a few more things about my own views on what this is. When we talk about trustworthiness, it is the characteristics of the AI system that we're talking about: the things we should be working on and developing that go beyond just predictive accuracy. Accuracy is the easy thing to do; it's what developers and researchers have been trained to optimize for many years. Once you start adding all these other considerations beyond accuracy, you really have to slow down and think about things holistically. And when we talk about adversarial robustness, algorithmic fairness, robustness to distribution shift, explainability, transparency, and how to govern these things, it's really not a question of whether we can do them. We can, if we put our minds to it. It's really about deciding that yes, this needs to happen.

So there's a question in the chat: can we use AI in calculating the sum of money needed by some institution or local or central government? Yeah, there are definitely back-of-the-envelope computations one can do in terms of what it takes to implement AI in the real world. There's a lot of open source software out there these days that's best of breed for doing a lot of machine learning and AI work, and then there are enterprise-grade solutions that can be built on top of those or that one can transition over to. Some of the work that I do is actually about creating open source software on these topics of trustworthy AI. Our department and strategy within IBM Research has released toolkits such as AI Fairness 360, AI Explainability 360, and the Adversarial Robustness Toolbox, and all of those are available for people to just take and use in their natural workflows (a short sketch of getting started with one of them follows below). That's a great way to get started. Then, once you feel you're in a good situation and have seen that AI has a role to play in your application or application domain, you can move on to IBM products or products from other organizations.

And a clarification on that question, about infrastructure: there are many different types of infrastructure needed for AI. There's data, there's computation, and then there's human skill, the human investment.
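To make that concrete, here is a minimal sketch of how one might get started with AI Fairness 360 (the aif360 Python package). The dataset choice, protected attribute, and group encodings below are illustrative assumptions, not something specified in the talk.

```python
# Minimal sketch: measuring and mitigating dataset bias with AI Fairness 360.
# Assumes the aif360 package is installed and the raw UCI Adult data files
# have been placed in aif360's data directory (the loader prints download
# instructions otherwise).
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the UCI Adult census dataset with 'sex' as the protected attribute.
data = AdultDataset(protected_attribute_names=['sex'],
                    privileged_classes=[['Male']])

privileged = [{'sex': 1}]    # aif360 encodes the privileged class as 1
unprivileged = [{'sex': 0}]

# Quantify bias in the raw data before any model is trained.
metric = BinaryLabelDatasetMetric(data,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Disparate impact:', metric.disparate_impact())
print('Statistical parity difference:', metric.statistical_parity_difference())

# One mitigation option: reweigh training examples so favorable outcomes
# are balanced across groups, then train any standard model on the result.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
data_reweighed = rw.fit_transform(data)
```

The other two toolkits follow a similar drop-in style, with AI Explainability 360 providing explainers and the Adversarial Robustness Toolbox providing attacks and defenses.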
So all of those come in various forms, and I think we need all three to really bring AI to practice. It really depends on where your starting point is.

Another question in the chat is whether there's a way to measure trust. In my opinion, measuring trust is not a single number that you can come up with and say this system is 98% trustworthy or something like that. There are many dimensions, many different ways in which we discuss trustworthiness, even just the four attributes I talked about, and some of them are more quantifiable than others. There are metrics for, let's say, fairness and robustness; even for explainability there are some proxy metrics, and for accuracy there certainly are. But there's no one best metric, even across different use cases and applications. It turns out that even for fairness there are different worldviews, which is why there are many different possible fairness metrics that make sense in different settings (a small illustration follows below). So in my opinion, we shouldn't be aiming for a single number of what it means to be trustworthy. We should be computing several different factors for AI systems, some quantitative, but also reporting qualitative things like intended uses. And all of those should be reported transparently in a fact sheet that comes along with an AI system and reports all of these different factors.

In terms of GPT-3: that's a large language model, part of a family of many other large language models that have been created recently. They ingest a lot of text data in order to build up an understanding of what language is. There's a lot of good that can come from these technologies, but there's also a lot of risk, especially because the data they draw from can include a lot of biases, coming from all sorts of weird and dark places that this text data comes from. One thing we're working on in IBM Research, where we have a strong language capability, is taking these language models and other natural language processing technologies and making them more domain-specific, for different industries and sectors. General language is great for a lot of tasks, but we do feel there needs to be a lot more industry and domain specialization as well. So that's one of our main directions, along with trustworthy AI: making language technology more accessible for different settings.

And is trusting AI different from trusting a human? I would say, in many respects, no. There are many commonalities, because we as people want the same attributes that make a system or a person worthy of trust. But there is a distinction between being worthy of trust and actually being trusted, and that relates to the fact that I can have all sorts of preconceived notions and biases myself that prevent me from trusting something that's worthy of trust, and vice versa. When a machine is in play, we already have all sorts of positive and negative impressions of machines, and because of that there is somewhat of a difference between what it takes to trust a machine and a human. But we should be aiming to make our systems as worthy of trust as we can.

Any other questions coming through? No more questions, it seems like.
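To illustrate the point that no single number captures fairness, here is a small sketch in plain Python of three common fairness metrics computed side by side and then bundled, together with qualitative context, into a factsheet-style report. The data, the choice of metrics, and the field names are all hypothetical.

```python
# Sketch: several fairness metrics, reported together in a factsheet-style
# record rather than collapsed into one "trust score". All data is made up.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 1 = privileged group

def selection_rate(pred, mask):
    """Fraction of a group receiving the favorable decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Fraction of a group's true positives the model got right."""
    positives = mask & (true == 1)
    return pred[positives].mean()

priv, unpriv = group == 1, group == 0

# Three common metrics; they encode different worldviews of fairness
# and need not agree on the same model.
stat_parity_diff = selection_rate(y_pred, unpriv) - selection_rate(y_pred, priv)
disparate_impact = selection_rate(y_pred, unpriv) / selection_rate(y_pred, priv)
equal_opp_diff = (true_positive_rate(y_true, y_pred, unpriv)
                  - true_positive_rate(y_true, y_pred, priv))

# Factsheet-style report: quantitative metrics plus qualitative context.
factsheet = {
    'intended_use': 'illustrative credit-risk screening (hypothetical)',
    'statistical_parity_difference': round(float(stat_parity_diff), 3),
    'disparate_impact': round(float(disparate_impact), 3),
    'equal_opportunity_difference': round(float(equal_opp_diff), 3),
}
print(factsheet)
```

On this toy data the metrics already diverge (disparate impact near 0.67 but equal opportunity difference of about -0.33), which is the point: each answers a different question, so they belong side by side in a fact sheet rather than averaged into one score.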
So just to repeat, this was a way for you to get a brief introduction to trustworthy AI. Kelly Combs from KPMG and I will be doing a webinar next week to go into more depth, with a lot more prepared remarks plus more Q&A at that time.

Last question before we wrap up, I guess: what are the typical blind spots for organizations that end up creating biases and drift, despite wanting to deploy AI responsibly? The typical blind spots are mainly just not asking the questions in the first place: what are the potential biases that might be in my data, or among the developers working on these technologies? There are many sorts of social biases, sampling biases, and temporal biases, and even in data preparation there can be biases introduced that affect the eventual performance of machine learning models and AI. So the message I would give is to spend the time, don't take shortcuts, work through and catalog every potential bias there might be, and then figure out mitigation strategies (a small sketch of one simple drift check is appended at the end of this transcript). Once you can do that, I think we have a lot of the tools to actually make progress.

And a last question: what is your data, the total internet? I guess this relates to the large language models question earlier. GPT-3 is not from IBM but from others, but any of the large language models being trained these days do not really rely on curated data; they just bring in any text they can find from the internet or elsewhere. That is a big risk for these large language models.

And another question, from Madhavan, about the book I'm writing. It's called Trustworthy Machine Learning, and the chapters are already available online. The whole book is about describing the mindset that business people, developers, and practicing data scientists need to have to build trustworthy AI systems, including everything I talked about today. Each chapter takes a particular industry use case and works through a different topic of trustworthy AI through it, so it's a resource for going deeper into this sort of discussion.

I think we're at the end of our time, so we can pause here. I hope to see many of you next week as well. Thank you.
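As a small appendix to the blind-spots question above, here is one sketch of a simple drift check: comparing a feature's training distribution against what the deployed model actually sees. The synthetic data and the significance threshold are assumptions for illustration, not a prescribed monitoring setup.

```python
# Sketch: flagging distribution drift between training and production data
# with a two-sample Kolmogorov-Smirnov test. Data and threshold are made up.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # production data (shifted)

# A small p-value suggests the production distribution has drifted
# away from what the model was trained on.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:   # illustrative alert threshold
    print(f'Drift detected (KS statistic={stat:.3f}, p={p_value:.2e})')
else:
    print('No significant drift detected')
```

In practice a check like this would run per feature on a schedule, with drift alerts feeding back into the cataloging-and-mitigation loop described in the answer above.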