Hi, I'm Peter Burris, and welcome to another CUBE conversation from our beautiful studios in Palo Alto. As we do with every CUBE conversation, we want to find a great topic and a smart person to talk about it, and that's what we've got today. What's the topic? We're going to be talking about new classes of AI that are capable of addressing some of the more complex white-collar work that gets done. And to have that conversation, we've got Amy Guarino, the COO of Kyndi, here on theCUBE with us today. Amy, welcome to theCUBE.

Thank you very much, Peter.

So, tell us a little bit about yourself first.

Sure. I grew up at IBM in sales and sales management and then started doing startups. Most recently, I spent eight years at Marketo, and just after the Vista acquisition I joined Kyndi. That was two years ago. It was a nine-person science and research kind of organization, and we've done a few things to get the group in order. We now have 31 folks and really focus on explainable AI.

Okay, so explainable AI, what is that?

What's really interesting is that AI has had a lot of success, specifically around deep learning and neural nets. One of the challenges with that approach is that it is a black box: you can't understand how it arrived at its outcome. I was with a customer yesterday, and they were telling me that they were using deep learning around water treatment plants. But they got a lot of feedback that, "If I'm going to be drinking the water, you need to explain to me what it is that you're doing to it and why." And they were like, "Well, holy cow, we can't." And they said, that's a problem. That's why they came to us: they wanted to learn how you could do an explainable type of AI. The approach we take really focuses on language and how to analyze that language, but doing it in a way where you're able to trace back to the actual raw data source to make sure that it really is correct.
So we think about it as augmenting humans rather than replacing humans.

Well, let me see if I can break that down, because I think of AI, at least things that are pertinent to AI, in a couple of different ways. To what degree is something programmatic, so that you can discover patterns in how the program operates and improve it? But there are also social elements to any system. The black box is good for very programmatic, relatively structured situations, where the problem space is well-defined, well-articulated, and has a very specific role in a broader context of things. But when we start talking about activities that have a significant social component, where human beings are a major participant or a major source of value in the activity set being performed, you can't count on a black box, because humans won't adopt it. So when you say discoverable AI, was that it?

Explainable AI.

Explainable AI. Is it really AI for those use cases where human beings are an essential part of the value creation, the value chain?

I think that's a great way to think about it. We initially thought it was going to be most applicable in regulated industries, where you have a requirement to explain it. What we found is that it absolutely works there, but it's also very relevant for any kind of decision where humans are allocating resources or doing something and have to explain why.

So explainable AI means that the AI can be more easily adopted in human-centered activities.

Absolutely.

So we think about AI, we think about deep learning, we think about machine learning. I mean, text automatically introduces natural language processing. What elements are you combining to make Kyndi's explainable AI work?

What we do is actually ingest documents: PDFs, Word documents, any kind of text.
We then apply natural language processing to that, to parse out the entities, the terms, all of the concepts. We apply machine learning so that we can extract what we call a proto-ontology, or structure, from that. So you don't have to do a lot of work up front building out a taxonomy, and therefore we have the benefit of being able to go from one domain to another very quickly. And then we take all of the...

Which, by the way, black box AI does not do well.

That's correct, that's absolutely correct. We address that deficiency as well. Then we take that output and put it in what we call cognitive memory, which is a proprietary knowledge graph that allows us to search the information from a context perspective: a cognitive type of search. We can also apply certain preset filters for different applications. One of the areas we focus on is pharmaceuticals. They're very interested in understanding and analyzing a lot of the text associated with reports around drug discovery, to be able to understand where there's data integrity and where there's not.

And whether the process has been followed.

Yes, absolutely. So being able to apply those preset filters across a really large data set, highlight and get to a smaller subset that the scientists can dig into, really understand where there are potential issues, and figure out how to mitigate those issues, is critical.

So let me see if I can generalize. Explainable AI is being applied in a domain like pharmaceuticals that has a common set of audit features to it, in terms of the methods used for drug discovery and drug authorization. And then it's utilized by the drug discovery people, who are responsible for actually validating that the process is being followed appropriately, to limit the amount of manual work that goes into the audit process. Have I got that right?

Yes, absolutely.
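The ingest, entity-extraction, and cognitive-memory steps described above can be sketched as a toy pipeline. This is purely an illustrative sketch of the general idea of provenance-preserving indexing, not Kyndi's actual implementation: the entity extraction here is a naive capitalized-phrase regex standing in for real NLP, and all names (CognitiveMemory, the report IDs, "Compound X") are hypothetical.

```python
import re
from collections import defaultdict

def extract_terms(text):
    """Toy stand-in for the NLP step: pull runs of capitalized words as candidate entities."""
    return re.findall(r"\b[A-Z][a-z]*(?:\s[A-Z][a-z]*)*\b", text)

class CognitiveMemory:
    """Toy 'knowledge graph': each term maps to (doc_id, sentence) provenance
    records, so every search hit can be traced back to its raw source text."""
    def __init__(self):
        self.graph = defaultdict(set)

    def ingest(self, doc_id, text):
        # Split the document into sentences and index every extracted term.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            for term in extract_terms(sentence):
                self.graph[term].add((doc_id, sentence))

    def search(self, term):
        # Return matches together with the raw source they trace back to.
        return sorted(self.graph.get(term, set()))

memory = CognitiveMemory()
memory.ingest("report-001", "Compound X reduced inflammation. Compound X was stable.")
memory.ingest("report-002", "Compound X failed the assay.")

for doc_id, sentence in memory.search("Compound X"):
    print(doc_id, "->", sentence)
```

The point of the sketch is the answer's shape: every result carries its document ID and source sentence, which is the kind of trace-back that makes an answer explainable rather than a black-box score.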
But by a huge factor. It's like a hundred times, yes.

Okay, well, that works.

Yeah, it does work.

So we can use that hundred-times improvement to reduce the number of people or to increase the volume of possible candidates for drug commercialization.

Absolutely right, absolutely right.

So what other domains do you expect Kyndi to be applied to?

It's a very broad capability: any kind of work where you're reading lots of text. Today we focus on the pharma opportunities. We have a lot of manufacturing folks who are looking at ways to review the tribal knowledge that exists within a manufacturing environment. As people retire, there's a lot of information that doesn't quite get passed down, and they're trying to figure out ways to capture that information and also make it more easily searchable.

Can you look at COBOL code?

We've talked about it, we've talked about it. We do that, and we also do a lot of work in government.

All right. You know, it's interesting that you started with pharmaceuticals. Most firms like yours work their way up to pharmaceuticals, because in pharmaceuticals the FDA is governed by rules where liabilities actually are associated with software. Most domains don't have to worry about that. So you're starting with the hardest problems, with the greatest potential commercial risk, and working your way into others.

Well, I think it's because it's explainable. That's the advantage we have, and so we're able to go back and provide that provenance to support how we got there. And that makes a big difference.

Okay, so what's going to happen with Kyndi in 2019?
We're going to continue to grow and really expand, particularly on the commercial side of the business: go beyond pharmaceuticals into manufacturing, maybe even a little into financial services. But really, it's about making our customers successful and showing how successful we can be; that's going to be our marketing capability, to help share this with the rest of the world.

Yeah, figure out COBOL and you can help my CIO friends; there are a lot of people like me retiring. All right, Amy Guarino, COO of Kyndi, talking about explainable AI and the need for new classes of tools that can augment human activity and make it more productive. Amy, thanks very much for being on theCUBE.

Thanks, Peter, it's been great.

Once again, I'm Peter Burris. Thanks very much for watching this CUBE conversation. Until next time.