Bonjour. Merci. That's the extent of my French. Thank you all, and thank you for having me. Hopefully we can go from a slightly light-hearted topic to a more serious one, which is really trust in AI. Our world is changing, and AI is invading every aspect of it.

Before I get into that, I have to give a shout-out to a doc maintainer, one of my team in the audience: Brad Topol, Kubernetes doc maintainer and developer evangelist at IBM. You can all tweet this. He's a great guy, and he's helping to run the doc sprints at Kubernetes. Please give Brad a shout-out. Thank you.

So think about this quote from Kevin Kelly. Kevin is a noted author; for anybody who saw Minority Report, that was some of his work. He's thought about AI, and it comes down to something very simple: there will be 10,000 new startups, and they're going to take something we already do and add AI technology to it. You can read his book, The Inevitable, for a little more on that.

And it can be anything. Here's a light-hearted example of something that really exists in Toronto; it's a real picture from one of my team, whose daughter attends a dance studio. It's night, and the security guard is gone, or off to the washroom. Somebody needs to get in and get to class. She can just press a button and talk to an application. The application verifies who she is, opens the door, and lets her in, and nobody has to worry about where the security person has gone after hours. It's done very simply: voice recognition technology, text-to-speech to play the response, and of course a signal to open the door. Smart things happen.

But things get more complex as we start using visual technologies, for example, and adversarial attacks exist that rely on very simple manipulations of an image. You can insert noise, and not a lot of noise in this case: a giant panda becomes a monkey, just by tricking the model with a little bit of noise. That works because the computer is looking at the small pixel-level elements inside the image; it isn't looking at the big picture, and it can't yet make the kind of holistic determination that we can. Deep neural networks are evolving, they're getting better, and they're taking us down this path of helping us make decisions. And you can think about this in terms of your own life: simple noise added to an image can hurt your bank account, because somebody can very easily fool a machine. Can you really trust that check scanner running in the background?
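The noise trick described above is the classic Fast Gradient Sign Method (FGSM): nudge every pixel slightly in the direction that increases the model's loss. Here is a minimal sketch in PyTorch; the model, the random stand-in image, the ImageNet label, and the epsilon value are illustrative assumptions, not details from the talk.

```python
# FGSM sketch: a tiny, nearly invisible perturbation can flip a classifier's output.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real photo
label = torch.tensor([388])                             # ImageNet class 388: giant panda

# Compute the gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.007  # perturbation budget: far too small for a human to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("before:", model(image).argmax().item())
    print("after: ", model(adversarial).argmax().item())
```

The point is the size of epsilon: the two images look identical to us, yet the prediction can change completely.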
So as the decisions we have to make get more and more complex, it becomes really necessary to think about what's going on. Credit applications, admission to colleges, employment applications, your healthcare, whether you've committed a crime, and possibly how you're going to be sentenced: all of these are being assisted by AI technology, and that can be kind of scary. So you have to ask: has the data been tampered with? Has the model been tampered with? Is there something inherently biased or unfair? We heard a little about that yesterday in some of the discussion about ethics. Can you look at that model, that black box, and understand it in any shape or fashion? Is what's being done tracked and accountable?

So how do we solve these problems? Well, IBM has a long history of being out and available in open source, working in Linux and other projects before it, across a variety of different organizations. So that's our solution here as well: we've put out some open source projects.

The Adversarial Robustness Toolbox is up there on GitHub, and you can go play with it. It allows you to inspect your models, look at the data, and make sure that nothing has been tampered with; there's a whole set of tools you can use.

The AI Fairness 360 toolkit helps you with 70 different metrics for examining and assessing models, and 10 different algorithms to help you fix a model when it's broken in some way; there's a sketch of that flow at the end.

And then, of course, explainability is something we all have to deal with. The way models are created these days, we can't necessarily understand how they were put together; they look like black boxes. We have to be able to probe them, to understand whether the data is biased in some way, and to make corrections. Turning that black box into a model that we humans can understand, that regulators can understand, that consumers can understand, is very important, and the toolkit there will help you.

We apply these tools in our products as well. Watson OpenScale has this as part of its dashboard; it allows you to track and measure your AI outcomes throughout the life cycle, throughout that whole pipeline.

One thing you can do to get involved is join Linux Foundation AI. There's a principles working group looking at the ethics and principles of how we all need to operate, as well as at use cases, and they're going to help take these tough problems and turn them into something we can all work with. And of course, it's a worldwide group, formed across North America, Europe, and Asia.

And lastly, you can come join us at IBM's CODAIT; we did take a French term, coder. So come join us and the work we're doing in CODAIT: join these projects, be part of it. Thank you.
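To make the fairness tooling above concrete, here is a minimal sketch of the check-then-mitigate flow with AI Fairness 360. The dataset, the protected attribute, the disparate-impact metric, and the choice of Reweighing as the mitigation step are illustrative assumptions, not details from the talk (AdultDataset also expects the raw UCI census files to be downloaded locally first).

```python
# AI Fairness 360 sketch: measure bias on a dataset, then apply one of the
# toolkit's mitigation algorithms and measure again.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

data = AdultDataset()  # UCI census income data, a common fairness benchmark
privileged = [{'sex': 1}]    # in this dataset, 1 encodes the privileged group
unprivileged = [{'sex': 0}]

# One of the ~70 metrics: disparate impact (ratio of favorable-outcome rates
# between groups; 1.0 means parity).
before = BinaryLabelDatasetMetric(data, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("disparate impact before:", before.disparate_impact())

# One of the ~10 mitigation algorithms: reweigh the samples so the groups
# are balanced with respect to the favorable label.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
repaired = rw.fit_transform(data)

after = BinaryLabelDatasetMetric(repaired, unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print("disparate impact after: ", after.disparate_impact())
```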