Welcome to the AI for Good Global Summit 2018. I'm delighted to be joined by Joelle Casteix, founding member of the Zero Abuse Project. Thank you for joining us.

It's my pleasure.

So tell us about the Zero Abuse Project. It's all about protecting children from sexual exploitation?

Exactly. The goal of our nonprofit is to protect children from sexual abuse and exploitation globally, no matter the form that exploitation takes.

So tell us how AI fits into this project and your mission.

Well, it's a funny story, because we never thought AI would be part of our mission. For years I personally, and then as part of the Zero Abuse Project, worked to figure out how to make organizations and institutions safer for kids. Our number one goal was to build a database, something that would let us get information quickly when we needed it. Then we met a contact here, Neil Sahota, whom we had known through another friend of ours. And he said, well, I think what you need is a little more than a database. We were introduced to AI and the possibilities it could give us, and from there we began to build our project, which is called Project G.

Project G is a tool that identifies the risk factors of predatory behaviors, not only of the predators who prey on children, but of those associated with the cover-up of that sexual exploitation. It's an amazing, fascinating tool, because it shows us what a predator looks like. We train the tool on what the various behaviors are. Our first example is the Catholic Church, because they simply have the best data: it goes back the furthest, we know the patterns best, and it's very solid data we can use to teach the tool, because we know who many of the confirmed predators are.
We can also look at it and say: the Church has told us they have certain numbers, but there are gaps in those numbers, and we can use the tool to find names to fill in those gaps. So we're very excited about what we're going to be able to do with this, about using AI to find patterns we never thought possible, to protect more and more children from abuse within institutions.

And it's not just about child abusers, but the people who cover up abuse as well?

Exactly. I'm a survivor of child sexual abuse within the Catholic Church, and I've spent the past 15 years working to help other survivors. What I have found is that for most victims, it isn't the actual abuse that causes the pain; it's the cover-up. And when you think about it, which is the deeper crime? A child sex predator is a very, very broken person who needs to be put behind bars. But what is the greater evil? The person who covers it up, the person who either stands by, knows it's happening, and lets it continue, or the person who knows it's happening and facilitates its continuation. So it was very important for us, as part of this AI project, to identify not only the patterns of the actual predators but also those of the people who cover up the abuse.

What that will allow us to do is take our first data set, the Catholic Church, where we have the best data, and then take our tool to other places: UN refugee camps, where we can cooperate with the UN, UN peacekeeping forces, the Boy Scouts of America, the US Olympic team. These are all institutions with horrible histories of sexual abuse. We will be able to help them identify not only the risk factors associated with the predators but also the risk factors associated with those who cover up the abuse, to ensure that when they pull out a predator, the cycle actually stops, because we'll also pull out the people who cover up the abuse.
You alluded earlier to the fact that you met one of your collaborators here in Geneva at one of our summits. Is that right?

Yes. When we first thought about this kind of project, I just wanted a database; I had no idea what the possibilities of AI were. One of our collaborators here, Neil Sahota, is one of the people who brought us in and said, hey, AI is a great possibility for you. Neil is with IBM, and we're working very hard with the IBM Watson for Good program, because as a nonprofit it gives us a lot of ability to see what we can do. Our number one goal is not profit; it's to make organizations and institutions safer for children. Because when you have a child, my son is 11, from the time he's five you take him to institutions, whether a school, a preschool, a church, or whatever, in the hope that they will keep him safe. That's an implicit agreement that you have. Our hope is to make sure each of those institutions is safer.

Well, thank you very much, and thanks for the work you are conducting amongst those organizations.

Thank you. My pleasure.