from the Corinium Chief Analytics Officer Conference, Spring, San Francisco. It's theCUBE.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at the Corinium Chief Analytics Officer Summit at the Parc 55 Hotel in San Francisco. We came up here last year. It's a really small, very intimate event, but there are a lot of practitioners sharing best practices, and we're excited to have a really data-driven company represented: Scott Zoldi, Chief Analytics Officer at FICO. Scott, great to see you.

It's great to be here. Thanks, Jeff.

Absolutely. So before we jump into it, I was just kind of curious. One of the things that comes up all the time when we talk with Chief Data Officers is how people integrate data organizationally. Does it report to the CIO, the CEO? So how have you done it? Where do you report into at FICO?

At FICO, when we work with data, it generally goes up through our CIO, but as part of that, the Chief Analytics Officer and the Chief Technology Officer also share the responsibility of ensuring that we organize the data correctly, that we have the proper governance in place, and that the proper concerns around privacy and security are addressed.

Right. So you've been in the data business forever. I mean, data is your business. So when you hear all this talk about digital transformation and becoming more data-driven as a company, how does that impact a company like FICO? You've been doing this forever. What kind of opportunities are there to take analytics to the next level?

For us, I think it's really exciting. As you say, we've been at it for 60 years, and analytics is at the core of our business: operationalizing the data and bringing better analytics into play. And now there's this new term, operationalizing analytics.
And so as we look at digital, we look at all the different types of data that are available to decisions and all the computational power that we have available today. It's really exciting to see the types of decisions that can be made with all that data and the different types of analytics that are available today.

Right. So what are some of those nuanced decisions? Because from the outside looking in, we see kind of binary decisions: either I get approved for the card or not, or I get the unfortunate "your card didn't go through," we had a fraud event, I got a call, please turn my card back on. It seems very binary. So as you get beyond the really simple binary decisions, what are some of the things you've been able to do with the business, having an obviously much more nuanced and rich set of data to work with?

One of the things that we focus on is really having a profile of each and every customer so we can make a better behavioral decision. We're trying to understand behavior, ultimately, and that behavior can be manifested in making a fraud decision or a credit decision. But it's really around personalized analytics, essentially analytics of one, that allows us to understand that customer very, very well, to make a decision around what the next opportunity is from a business perspective or a retention perspective, or around improving that customer experience.

Right. And then how much of it, as you talk about operationalizing this, is operationalized inside the computers and machines that are making judgments, scoring things, and handing out decisions, versus the human factor, the human touch? How do you divide which goes where, and how do you prioritize so that more people get more data to work with and make decisions, versus just the decisions driven inside an algorithm, inside a machine?
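The "analytics of one" idea described here — a running behavioral profile per customer, against which each new event is judged — can be sketched roughly as follows. The decay factor, the single amount feature, and the z-score-style scoring rule are illustrative assumptions for this sketch, not FICO's actual method.

```python
# Hypothetical sketch of a per-customer behavioral profile. Each customer
# carries exponentially decayed statistics of their own history, so a new
# transaction is scored against *their* normal, not a global average.

class CustomerProfile:
    """Tracks exponentially decayed averages of transaction behavior."""

    def __init__(self, decay=0.95):
        self.decay = decay          # weight given to history vs. new events
        self.mean_amount = 0.0      # decayed average transaction amount
        self.mean_sq = 0.0          # decayed average of squared amounts
        self.count = 0

    def update(self, amount):
        if self.count == 0:
            self.mean_amount = amount
            self.mean_sq = amount * amount
        else:
            d = self.decay
            self.mean_amount = d * self.mean_amount + (1 - d) * amount
            self.mean_sq = d * self.mean_sq + (1 - d) * amount * amount
        self.count += 1

    def anomaly_score(self, amount):
        """How unusual is this amount for *this* customer (z-score-like)?"""
        var = max(self.mean_sq - self.mean_amount ** 2, 1e-9)
        return abs(amount - self.mean_amount) / var ** 0.5

profile = CustomerProfile()
for amt in [20.0, 35.0, 25.0, 30.0]:   # typical small purchases
    profile.update(amt)

# A $28 purchase looks normal for this customer; a $900 one stands out.
print(profile.anomaly_score(28.0) < profile.anomaly_score(900.0))  # True
```

The decayed-average design is what lets this run in an operational setting: the profile is a few numbers per customer, updated in constant time per transaction, rather than a query over the full transaction history.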
Yeah, it's a great point, because a lot of times organizations want to apply analytics to the data they have, but they haven't given a thought to the entire operation around it. We generally look at four parts. One is around data: what is the data we need to make a decision, because the business decision always comes first. Where is that data, how do we gather it, and how can it be made available? The next stage is the analytics we want to apply, and that involves the time we have to make a decision and how that decision gets made over time. Then comes the people part: what is the process to work with that score? Record the use of, let's say, an analytic. What was the outcome? Was it more positive based on using that analytic? Then incorporate that back to make changes to the business over time, to take actions that improve that process. That's a continual process you have to have when you operationalize analytics. Otherwise, it can be a one-off analytic adventure, but not part of the core business.

Right, and you don't want that. And what about other data, third-party data that you bring in that isn't part of your core? Obviously you have a huge corpus of your own internal data, and data through your partner financial institutions, but have you started to pull in more third-party data, social data, other types of things, to help you build that behavioral model?

It kind of depends on the business that we're in and the region that we're in. Some regions, for example outside of the United States, are taking much more advantage of social data and social media, and even mobile data, to make, let's say, credit decisions. But we generally find that most organizations aren't even leveraging the data they already have in-house appropriately, to the maximum extent, and so that's usually where our focus is.
Right, right. So to shift gears a bit: it's an interesting term, explainable AI. I'd never heard that phrase. So when you talk about explainable AI, what exactly does that mean?

Yeah, so machine learning is a very, very hot topic today, and it's focused on developing machine learning models that learn relationships in data, which means you can leverage algorithms to make decisions based on collecting all this information. Now, the challenge is that while these algorithms can be much more capable than a human being, even superhuman, it's generally very difficult to understand how they made a decision and how they came up with a score. So explainable AI is about deconstructing and analyzing that model so we can provide reasons for why the model scored the way it did. And that's actually paramount, because today we need to provide explanations as part of regulatory requirements around the use of these models. So as we operationalize analytics and use things like machine learning and artificial intelligence, explainability, the ability to answer "why did this model score me this way," is front and center, so we can have that dialogue with a customer, and they can understand the reasons and maybe improve the outcome in the future.

Right. And was that driven primarily by regulation, or because it just makes sense to be able to peel back the onion? On the other hand, as you said, the way machines learn and operate is very different from the way humans calculate, so maybe there's some stuff in there that's just not going to make sense to a person. How do you square that circle?
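The reason-code idea behind explainable AI as Scott describes it — decomposing a score into the factors that drove it — can be sketched for the simplest case, a linear score. The feature names and weights here are invented for illustration; real scoring models are far more complex, and explaining them is correspondingly harder.

```python
# A minimal sketch of score explanation in the reason-code sense: for a
# linear score, each input's contribution is weight * value, so we can
# report which inputs pulled the score down the most.

def score_with_reasons(features, weights, top_n=2):
    """Return (score, reasons). Reasons are the inputs with the most
    negative contributions, mirroring credit-score reason codes."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Sort ascending: the most negative contributions come first.
    reasons = sorted(contributions, key=contributions.get)[:top_n]
    return score, reasons

weights = {"on_time_payment_rate": 300.0,   # hypothetical weights
           "utilization": -150.0,
           "recent_inquiries": -20.0}
applicant = {"on_time_payment_rate": 0.98,
             "utilization": 0.85,
             "recent_inquiries": 3.0}

score, reasons = score_with_reasons(applicant, weights)
print(reasons)  # ['utilization', 'recent_inquiries']
```

For a linear model the decomposition is exact; the hard part of explainable AI is producing comparably faithful reasons for nonlinear models like neural networks and tree ensembles, where no such clean per-feature split exists.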
So for us, our journey in explainable AI started in the early '90s, so it's always been core to our business, because as you say, it makes common sense that you need to be able to explain a score if you're going to have a conversation with a customer. Since that time, machine learning has become much more mainstream, and there are over 2,000 startup companies today all trying to apply machine learning and AI, and that's where regulation is coming in. In the early days, we used explainable AI to make sure we understood what the model did, how to explain it to our governance teams, and how to explain it to our customers, so our customers could explain it to their clients. Today it's about having regulation to make sure that machine learning and artificial intelligence are used responsibly in business.

Yeah, it's pretty amazing, and that's why I think we hear so much about augmented intelligence as opposed to artificial intelligence. There's nothing artificial about it; it's very different, but it really is trying to provide a little bit more data, a little bit more structure, more context to the people who are trying to make decisions.

And that's critically important, because very often the AI or machine learning model will make a decision differently than we will, so it can add some level of insight. But we always need that human factor in there to validate the reasons and the explanations, and to make sure we have human judgment running alongside the machine learning model.

So I can't believe we're sitting here saying it's, whatever it is, May 15th today; the year is almost halfway over. But what are some of your priorities for the balance of the year? What are some of the things you're working on as you look forward? Obviously FICO is a big data-driven company; you have a ton of data, you're in a ton of transactions, so you're at the front edge of this whole process. What are you looking at?
What are some of your short-term and mid-term priorities as you move through the balance of the year into next year?

So number one is around explainable AI, really helping organizations get that ability to explain their models. We're also focused very much on bringing more of the unsupervised analytic technologies to the market. Very often when you build a model, you have a set of data and a set of outcomes, you train that model, and you have a model that makes predictions. But more and more, we have parts of our business today where unsupervised analytic models are much more important, in areas like...

So what does that mean, an unsupervised analytic model?

Essentially, it means we're looking for patterns that are not normal, unlike any other customer's. If you think about a money launderer, there are going to be very few people who behave like a money launderer, or an insider, or something along those lines. So by building really, really good models that predict normal behavior, any deviation or misprediction from that model can point to something very abnormal, something that should be investigated. And very often we use those in areas like cybersecurity, crimes like money laundering, insider fraud, areas where you're not going to have a lot of outcome data to train on but you still need to make good decisions.

Wow, which is really hard for a computer, right? That's the opposite of the types of problems they like, problems with a lot of reps.

Correct. That's why the focus is on understanding good behavior really, really well, and anything different from what the model thinks is good could potentially be bad.

All right, Scott. Well, we'll keep track of all of our scores. We all depend on it.

We all do. Thanks for taking a few minutes out of your day.

Thank you, Jeff, appreciate it.

All right, he's Scott, I'm Jeff. You're watching theCUBE from San Francisco. Thanks for watching.
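The unsupervised approach Scott describes — learn normal behavior really well, then investigate whatever the model mispredicts — can be sketched with a toy model trained only on normal event sequences. The event names, the first-order Markov model, and the smoothing floor are all invented for illustration, not any production anti-money-laundering system.

```python
# Toy sketch of unsupervised anomaly detection: fit a model of *normal*
# behavior only (a first-order Markov chain over event types), then score
# new sequences by how poorly the model predicts them.

from collections import defaultdict
import math

def fit_normal(sequences):
    """Learn transition probabilities from sequences of normal events."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

def surprise(probs, seq, floor=1e-3):
    """Average negative log-likelihood; high = unlike normal behavior.
    Unseen transitions get a small floor probability instead of zero."""
    nll = [-math.log(probs.get(a, {}).get(b, floor))
           for a, b in zip(seq, seq[1:])]
    return sum(nll) / len(nll)

# Train on normal sessions only; no labeled "fraud" examples are needed.
normal = [["login", "browse", "pay", "logout"]] * 50
model = fit_normal(normal)

print(surprise(model, ["login", "browse", "pay", "logout"]))  # 0.0
print(surprise(model, ["login", "export", "export", "wire"]) > 5)  # True
```

The key property is the one from the interview: no outcome labels are required, which is exactly the situation in money laundering or insider fraud, where confirmed bad examples are rare; anything the normal-behavior model finds surprising becomes a candidate for investigation.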