Oh, good morning, everybody. First, just a quick nod to the OpenPOWER team: I was really excited to see what they had done there and the work that's going on. I'm going to talk a little bit about some AI things today, trust in particular, but training models, moving data around, being able to attach accelerators, all those things are really important, and the OpenCAPI and OMI work in particular is great. Honestly, I actually worked on the POWER instruction set a long time ago, so I'm really happy to see that move forward.

People who know me usually think of me as either that open source guy, or the guy who does things with OpenJS, or someone who is cloud-oriented, but I actually have some other responsibilities at IBM too. The Center for Open-Source Data and AI Technologies (CODAIT) is one of them. And for those of you who don't know, IBM joined a group a few years ago that we actually helped found: the Partnership on AI, which exists to create best practices for AI and to champion its ethical use. I was one of the IBM leaders who helped shepherd that forward, bring the group together, and make it work.

This topic, though, is something I think we're all starting to see in the press quite a bit: trusting AI. It's a very difficult thing these days, and why is that? Well, we've seen examples where people tamper with models. When you tamper with a model, or tamper with the data to color something, it can create a problem. Is it fair? Does it have biases? There are tremendous issues around race, gender, and other attributes that can potentially affect how society views the use of these models. Can anybody understand how a model made its decision? You saw some examples earlier of deep learning and the path a model can take through its progression to get where it is.
And that's going beyond the level at which we humans can grok what's going on. And then, of course: is it accountable? Throughout its life cycle, can you figure out the decisions it made, and what to do about those? That's a hard one; there's not really much out there yet, and I'm not going to talk much about it. But those are the four pillars, as I look at it, for decisions that are made by a computer, because they're going to impact my life.

So, in the news. Just to point this one back out: you see an image there, and that image is what we all would see. The computer, however, sees something a little different. It's all math, ones and zeros put together, looking at the bitmap and trying to figure out what the thing is; we saw a little of that earlier. But if you attack that target, you can take that same image, and when we look at it the human eye won't see anything wrong with it, but it's been attacked and now the AI sees nothing. This is adversarial machine learning, and people can actually use it to attack models and produce outcomes you don't want to have. I'm not going to go deep into it here, but there's an Adversarial Robustness Toolbox that IBM has released; if you go to art-demo.mybluemix.net you can see it at work, and you can see how to mitigate those attacks as well. My favorite example there is a Siamese cat: you can look at it, you can attack it, and after the attack the AI decides it's an ambulance, while to you the picture still just looks like a cat. So this is a really serious problem we have to deal with if we want to trust these models.
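The cat-to-ambulance trick rests on a simple principle: nudge every input feature a small, bounded amount in the direction that most moves the model toward the wrong answer. The sketch below is not the ART API; all weights and "pixel" values are invented. It shows the fast-gradient-sign idea on a toy linear classifier, where the gradient of the score with respect to the input is just the weight vector.

```python
# Toy sketch of a fast-gradient-sign (FGSM-style) evasion attack on a
# linear classifier. Illustrative only: real attacks, like those in the
# Adversarial Robustness Toolbox, target deep networks, but the core
# idea is the same. All weights and "pixel" values here are invented.

def predict(w, b, x):
    """Linear classifier: 'cat' if w.x + b > 0, else 'not cat'."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "cat" if score > 0 else "not cat"

def fgsm_perturb(w, x, eps):
    """For a linear model the gradient of the score w.r.t. x is just w,
    so the worst-case bounded perturbation subtracts eps * sign(w)."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.7, 0.2]   # hand-set model weights
b = -0.5
x = [0.8, 0.1, 0.6, 0.3]    # the clean "image": classified as cat

x_adv = fgsm_perturb(w, x, eps=0.4)  # each feature moves by at most 0.4

print(predict(w, b, x))      # cat
print(predict(w, b, x_adv))  # not cat
```

The same trick, scaled to millions of pixels with a much smaller eps, is what lets an attacked image fool the model while looking unchanged to the human eye.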
Okay, bias: a pretty famous thing out in the press as well. You can google this and look it up: Northpointe's COMPAS system was deployed out there, and they thought they were helping with the process of figuring out what kind of sentencing should happen, looking at folks and building a model. Great, except it flagged black defendants almost twice as often as white defendants who had prior offenses. This is something the press discovered by going in and doing investigative reporting, and more and more people are going to start looking at these things as AI technologies are deployed, to ask: is this being done well, is it being done fairly? Bias is clearly a problem, and it can creep into the data, it can creep into the model; there are lots of things that can cause it to happen.

So how would IBM solve this? Let's do some open source, right? That's what I do. We've joined a lot of organizations, we have this long history, and there's a trusted AI life cycle we have to engage with. So we've created some open source projects and put them out there: the Adversarial Robustness Toolbox I just talked about, AI Fairness 360, AI Explainability 360, and soon something coming for lineage as well. So let's talk about fairness and explainability. In the fairness world, we want to make sure we get the biases out of the model. If you take a look at aif360.mybluemix.net, you can get quite a bit more detail; try that same COMPAS example, click on it, and see what happens. Simply reweighting the data just a little could take almost all of the bias out of that model. Of course, there's a whole series of different algorithms you can use and metrics you can look at: at least 10, I think more like 12, mitigation algorithms right now, and
over 70 fairness metrics in play; it's a Python package. Explainability, same thing. Think about being out there trying to get a loan, something we all worry about in our lives: we want to finance the new boat or the new house or whatever. There's a loan officer who interacts with that, and behind them the financial institution has models. They look at a lot of data, they look at you, your ones and zeros and numbers, they crunch that, and a machine learning model gives you a score and basically decides whether you get that credit or not. Many factors might go into it: how many times have you applied for credit recently, have you ever been late on payments within so many days, et cetera. They look at a lot of different weightings and characteristics to come up with this.

So what we've put out within the last couple of weeks is AIX360, our explainability toolkit: you can actually go in and use tools to help people explain how the AI made its decision. And think about it: there are multiple constituents in the loan case I just talked about. There's certainly the end user, you, who wants to understand how the AI made its decision, and there might be some specific things you should get feedback on within that. Then there's the loan officer, who wants to see how you compare to other people in making his decision, as well as the scores you had across various aspects of the model. And then of course there's the data scientist, who's trying to keep that thing up over time, manage its life cycle, and understand how it's making its decisions over time, how it's being improved, changed, et cetera. So this is a toolbox that will help you go and do that. All right, so we're not just putting these things out in open source; these
are projects that we're using internally as well. If you come to IBM and take a look at Watson OpenScale, you'll see that these capabilities are things we've built into the insights dashboard that helps us track and measure outcomes in AI. And then, of course, what's become much more important these days is managing regulatory compliance. I got a chance the other day to meet with the IBM officer who's in charge of these things, who is using the tools we've created and working with the various teams who own these models to make sure they pass the tests, the ISO tests and the other things we of course have to pass to maintain regulatory compliance. So decisions still get made, but with a lot more trust, and we can track our models. You too can take advantage of these things: they're out in the open, you can come participate, and they're part of product now as well.

So, being the open source guy, how else can we get more involved in this? What am I going to do? Well, we've announced that we're joining the Linux Foundation AI group, LF AI. They were formed a year or so ago, I guess, and we've been informally part of it for a while, working in some of the work groups like the ML workflow group, but we want to take this trust topic into the discussion that's going on there. So you're going to see us very active in that, and I invite you all to come out and join the group. We think it's a great group, they've got some good projects going on within it, and I really thank the Linux Foundation for taking on this topic with us. Trust and transparency: you'll see us doing a lot more with it there. The other place to join in, if you want to participate in the code, understand the projects, and be part of it, is the codait.org site; that's an easy way of finding it. Or you can find the code on GitHub: just go out, do a search, and pick it up. We're happy to participate with
you. So that was it. I just want to say thank you, and I hope you all have an excellent conference this week.
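To make the fairness point from earlier concrete: the "just reweight it" fix that nearly removed the bias in the COMPAS demo traces back to the reweighing preprocessing algorithm of Kamiran and Calders, which AIF360 implements. The snippet below is not the AIF360 API, and the dataset is invented; it is a minimal sketch of the arithmetic. Each (group, label) pair gets weight P(group) * P(label) / P(group, label), which makes the weighted favorable-outcome rate equal across groups.

```python
# Sketch of the reweighing bias-mitigation idea (Kamiran & Calders),
# which AIF360 ships as a preprocessing algorithm. Not the AIF360 API;
# just the core arithmetic on a tiny invented dataset.
# Each record is (group, label); label 1 is the favorable outcome.

from collections import Counter

data = [("A", 1)] * 30 + [("A", 0)] * 10 + [("B", 1)] * 10 + [("B", 0)] * 30

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

def weight(g, y):
    """w(g, y) = P(g) * P(y) / P(g, y): if group and label were
    statistically independent, every weight would be exactly 1."""
    return (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)

# Before: disparate impact = P(favorable | B) / P(favorable | A).
di_before = (10 / 40) / (30 / 40)
print(round(di_before, 2))  # 0.33, far from the "fair" value of 1.0

def weighted_favorable_rate(group):
    """Favorable-outcome rate for a group after applying the weights."""
    fav = sum(weight(g, y) for g, y in data if g == group and y == 1)
    tot = sum(weight(g, y) for g, y in data if g == group)
    return fav / tot

# After reweighing, both groups see the same favorable rate.
print(round(weighted_favorable_rate("A"), 2))  # 0.5
print(round(weighted_favorable_rate("B"), 2))  # 0.5
```

A downstream model trained with these instance weights no longer sees group membership correlated with the favorable label, which is exactly the effect the demo's "reweight it a little" step produces.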