Interviewer: Welcome to the AI for Good Global Summit here in Geneva on day three. I'm joined now by a guest from California who is actually here in person: Anna Bethke. I hope I got that correct. You're with Intel, where you're head of AI for social good. You took part in a couple of panels here. What was your message?

Anna Bethke: The biggest message is that anyone can use the capabilities we already have, whether technology or artificial intelligence software, and leverage them to achieve their missions. Let AI be a force multiplier for their ends. As head of AI for social good at Intel, one of my big goals is to educate people on what AI can do and help them learn how to use these tools, so tomorrow I'll be doing a hands-on workshop. It's been lovely just hearing about all the different problems people are facing and sharing some of the ones we've been working on ourselves.

Interviewer: In concrete terms, what are you working on?

Anna Bethke: There are a lot of different projects. Some are in the AI for conservation space. We've been working with the nonprofit RESOLVE on an AI-enabled camera trap. Camera traps are super noisy, so we helped them embed one of our vision processing units into the camera. Every time it takes a picture, it can tell whether it's just leaves moving or whether there's a person or an animal in the frame, and then send only the relevant images to the park rangers or conservation experts. Another project is with a company called Hoobox, which has created what is basically a wheelchair robot: a small piece of technology mounted on a wheelchair that uses our 3D RealSense camera, along with some of our other technology, to do facial gesture recognition. This enables a wheelchair user to control their own wheelchair using whichever facial gestures are easiest for them.
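The on-device filtering described above can be sketched as a simple gating step. The Python below is illustrative only: the classifier is a placeholder standing in for the model running on the vision processing unit, and all names and labels are hypothetical, not Intel's or RESOLVE's actual API.

```python
# Hypothetical on-camera filter: classify each frame locally and
# forward only frames that contain a person or an animal.

RELEVANT = {"person", "animal"}

def classify(frame):
    # Placeholder for the on-device vision model; here each toy frame
    # carries a precomputed label purely for demonstration.
    return frame["label"]

def frames_to_send(frames):
    """Return only the frames worth transmitting to rangers."""
    return [f for f in frames if classify(f) in RELEVANT]

frames = [
    {"id": 1, "label": "foliage"},
    {"id": 2, "label": "animal"},
    {"id": 3, "label": "person"},
]
kept = frames_to_send(frames)
print([f["id"] for f in kept])  # [2, 3]
```

The point of the design is that the expensive, noisy data never leaves the camera; only the small relevant subset consumes bandwidth and ranger attention.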
Interviewer: Few companies know more about the collection of big data than Intel. You know all the concerns that have been raised about AI and who owns that big data.

Anna Bethke: Yes, it's definitely a concern, and one we are very adamant about addressing. Some of our research looks at ways to help people retain the privacy of their own data, through approaches like federated learning and homomorphic encryption. We just released an open source package for homomorphic encryption. The basic idea is that your data stays encrypted, you run your artificial intelligence algorithms on top of it and get the right answers while everything remains encrypted, and then you decrypt only the results. That helps the medical field in particular. Federated learning, layered on top of that, lets you keep all your data in different databases: each hospital, or any kind of service you can think of, holds its own data in-house. A central repository can then say, "I'm going to build an algorithm, so I'll touch your data, but I won't take it out of your own store." That really increases privacy.

Interviewer: A lot of nonprofit groups, here and in the States, have said they want that data lake. Everyone has their own little data lake; they want it all to be shared and open sourced. Is Intel willing to do that?

Anna Bethke: Yes, we open source a lot of different data sets, and we definitely think open source is the answer, whether in terms of data, software, et cetera. That's something we always strive for, while making certain that people's privacy is retained.
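The "compute on encrypted data, then decrypt only the result" idea can be illustrated with textbook RSA, which happens to be multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This is emphatically not Intel's package and not a secure scheme (tiny primes, no padding); it is a minimal sketch of the principle only.

```python
# Toy homomorphic computation via textbook RSA:
# E(a) * E(b) mod n decrypts to a * b.

p, q = 61, 53                        # toy primes (insecure, for illustration)
n = p * q                            # modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (needs Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# An untrusted party multiplies ciphertexts without ever seeing a or b.
c_product = (ca * cb) % n

print(decrypt(c_product))  # 84, i.e. a * b, visible only to the key holder
```

Modern homomorphic-encryption libraries support richer operations (additions and multiplications on encoded vectors), but the trust model is the same: the party doing the computation never holds the decryption key.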
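The federated setup described above, where each hospital keeps its data in-house and a coordinator only aggregates, can be sketched as federated averaging. The toy below uses a one-parameter linear model and plain Python; the function names are illustrative and do not correspond to any particular framework.

```python
# Toy federated averaging: each "hospital" fits a slope on its own local
# data, and the coordinator averages the fitted parameters weighted by
# data size, without ever pooling the raw records.

def local_fit(xs, ys):
    """Least-squares slope through the origin for one site's data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(sites):
    """Aggregate per-site parameters; only parameters leave each site."""
    total = sum(len(xs) for xs, _ in sites)
    return sum(local_fit(xs, ys) * len(xs) / total for xs, ys in sites)

# Two sites whose local data both follow y = 2x.
site_a = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
site_b = ([4.0, 5.0], [8.0, 10.0])

global_slope = federated_average([site_a, site_b])
print(global_slope)  # 2.0
```

In a real deployment the exchanged quantities are model weight updates over many rounds rather than a single closed-form fit, but the privacy property is the same: only parameters cross the boundary, never patient records.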
It's important to have both mechanisms, and more data is often better for these algorithms in terms of getting a more diverse pool; the algorithms we use are often very data hungry, especially when it comes to deep learning. But it should never come at the cost of privacy, for sure.

Interviewer: It's your first time at this summit. Your thoughts?

Anna Bethke: I love it. In all honesty, I've been looking forward to this for about a year. I missed the last one by about a week; I learned about it a little too late. It's been lovely hearing about all the different projects people are working on, trying to help other individuals get access to resources, helping them learn the capabilities of AI, and demystifying some of the fears around it as well.

Interviewer: That was Anna Bethke from Intel, here for the first time, sharing her ideas on what social good means when it comes to AI. Thank you very much.

Anna Bethke: Thank you as well.