Live from Times Square in New York City, it's theCUBE, covering IBM's Change the Game, winning with AI, brought to you by IBM. Hi everybody, welcome back to The Big Apple. My name is Dave Vellante. We're here in the theater district at the Westin Hotel covering a special CUBE event. IBM's got a big event today and tonight. If we could pan here to this pop-up, Change the Game, winning with AI. So IBM's got an event tonight at Terminal 5, which is right up the West Side Highway. Go to ibm.com slash win with AI, register. You can watch it online, or if you're in the city, come down and see us, we'll be there. IBM, a bunch of customers will be there. We had Rob Thomas on earlier. He's kind of the host of the event. IBM does these events periodically throughout the year. They gather customers, put forth some thought leadership, talk about some hard news. So we're very excited to have John Thomas here, distinguished engineer and director of IBM Analytics, and a longtime CUBE alum. Great to see you again, John. Same here, Dave, great to be here. So we just heard a great case study with Niagara Bottling around the data science elite team. That's something that you've been involved in, and we're going to get into that. But give us the update since we last talked. What have you been up to? Sure, sure. So we're living and breathing data science these days. The data science elite team, we are a team of practitioners. We actually work collaboratively with clients. And I stress the word collaboratively, because we're not there to just go do some work for a client. We actually sit down, expect the client to put their team to work with our team, and we build AI solutions together, scope use cases, expose them to expertise, tools, techniques, and do this together, right? And we've been very busy, I can tell you that.
There's been a lot of travel around the world, a lot of interest in the program, and engagements that bring us very interesting use cases. Use cases that you would expect to see, and use cases that make you go, hmm, I had not thought of a use case like that. But it's been an interesting journey in the last six, eight months now. And these are pretty small, agile teams. Sometimes people use the term tiger teams. I mean, two-pizza teams, right? And my understanding is you bring some number of resources, call it two or three data scientists, and the customer matches that resource. Exactly. That's the prerequisite. That is a prerequisite, because we're not there to just do the work for the client. We want to do this in a collaborative fashion, right? So the customer's data science team is learning from us. We are working with them hand in hand to build a solution out. And that's got to resonate well with customers. I mean, so often in the services business, customers will say, well, I don't want to have to keep going back to a company to get these services. Teach me how to fish, and that is exactly what I want. Exactly, I was going to use that phrase. That's exactly what we do. So at the end of the two or three month period, when IBM leaves, when my team leaves, the client, the customer, knows what the tools are, what the techniques are, what to watch out for, what the success criteria are. They have a good handle on that. So we heard about the Niagara Bottling use case, which was pretty narrow. How do we optimize the use of the plastic wrapping? Save some money there, but at the same time maintain stability. Quite a narrow use case. What are some of the other use cases? Yeah, that's, like you said, a narrow one, but there are some use cases that span industries, that cut across different domains. I think I may have mentioned this in one of our previous discussions, Dave.
Customer interactions, trying to improve customer interactions, is something that cuts across industries, right? Now, that can be across different channels. One of the most prominent channels is the call center. I think we have talked about this previously. Like, I hate calling into a call center, because I don't know what kind of support I'm gonna get. But what if you could equip the call center agents to provide consistent service to the caller and handle the calls in the most appropriate way? Reducing costs on the business side, because call handling is expensive, and eventually leading up to, can I even avoid the call, through insights on why the call is coming in in the first place? So this use case cuts across industries. Any enterprise that has got a call center is doing this, right? So we are looking at, can we apply machine learning techniques to understand the dominant topics in the conversation? These have to be unsupervised techniques. Once we understand the dominant topics in the conversation, can we drill into that and understand what the intents are, and does the intent change as the conversation progresses? So I'm calling someone, and it starts off with pleasantries, then goes into the weather, how are the kids doing, complaining about life in general, but then you get to something of substance, why the person was calling in the first place. And you may think that is the intent of the conversation, but you find that as the conversation progresses, the intent might actually change. Can you understand that in real time? Can you understand the reasons behind the call, so that you could take proactive steps to maybe avoid the call coming in the first place? This use case, Dave, we are seeing so much interest in this use case, because call centers are a big cost to most enterprises. Let's double down on that, because I want to understand this. So every time you call a call center, this call may be recorded, you know, for quality of service.
So you're recording the calls, and maybe using NLP to transcribe those calls? So NLP is just the first step. You're absolutely right, the calls come in, and there are already call recording systems in place; we're not getting into that space, right? So call recording systems record the voice calls. Then, in an offline batch mode, you can take these millions of calls, pass them through a speech-to-text mechanism, which produces a text equivalent of the voice recordings. Then what we do is apply unsupervised machine learning, clustering, and topic modeling techniques against that, to understand what the dominant topics in these conversations are. You do kind of an entity extraction of those topics. Exactly, exactly. Then we find what the relevant ones are, what the relevancy of topics in a particular conversation is. But that's not enough; that is just step two, if you will. Then we build what is called an intent hierarchy. So at the topmost level it might be, let's say, payments. The call is about payments. But what about payments, right? Is the intent to make a late payment? Or is the intent to avoid the payment, or contest a payment? Or is the intent to structure a different payment mechanism? Can you get down to that level of detail? Then comes a further level of detail, which is the reason that is tied to this intent. What is the reason for a late payment? Is it a job loss, a job change? Is it because they are just not happy with the charges that have come in? What is the reason? And the reason can be pretty complex, right? It may not be in the immediate vicinity of the snippet of conversation itself. So you've got to go find out what the reason is and see if you can match it to this particular intent.
So there are multiple steps to the journey, and eventually, so we do all of this in an offline batch mode, and you're building a series of classifiers, a set of classifiers, but eventually we want to get this to real-time action. So think of this. If you have machine learning models, supervised models that can predict the intent, the reasons, et cetera, you can have them deployed, operationalize them. So when a call comes in in real time, you can stream it in real time, do the speech-to-text, pass it to the supervised models that have been deployed, and the model fires and comes back and says, this is the intent, take some action, or guide the agent to take some action in real time. Based on some automated discussion, right? So, tell me what you're calling about. Right. That kind of thing. Is that right? So it's probably even gone past, tell me what you're calling about. It could be that the conversation has begun to get into, you know, I'm going through a tough time, my spouse had a job change. You know, that is itself an indicator of some of the reasons. And can that be used to prompt the CSR to take some action that's appropriate to the conversation? So I'm not talking to a machine at first. I'm talking to a human. No, no, you're still talking to a human. And real-time feedback to that human is a good example of human augmentation. I wanted to go back in the process a little bit, in terms of the model building. Are there humans involved in calibrating the model? There have to be. There have to be. So, you know, for all the hype in the industry, you still need expertise to look at what these models produce, right? Because if you think about it, machine learning algorithms by themselves have no understanding of the domain. They are, you know, statistical in nature. So somebody has to marry these statistical observations with the domain expertise.
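The real-time step John describes, a supervised intent model trained offline and then scoring transcribed snippets as the call streams in, might look something like this minimal sketch. The intent labels, training snippets, and model choice are all hypothetical stand-ins, not the actual classifiers the team builds.

```python
# Sketch of the operationalized step: a supervised intent classifier,
# trained offline on labeled snippets, scoring live speech-to-text output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labeled set: snippet -> intent from the hierarchy.
snippets = [
    "I want to pay my bill a few days late",
    "can I delay this month's payment",
    "this charge is wrong I dispute it",
    "I did not authorize this payment",
    "can I switch to monthly installments",
    "set up a different payment plan",
]
intents = [
    "late_payment", "late_payment",
    "contest_payment", "contest_payment",
    "restructure_payment", "restructure_payment",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, intents)

# As the call streams in, each transcribed snippet is scored, and the
# predicted intent can drive the guidance shown to the agent.
live_snippet = "my spouse had a job change can we delay the payment"
predicted = model.predict([live_snippet])[0]
print(predicted)
```

In a production deployment the trained model would sit behind a scoring service (or be embedded in the application, as John discusses next), with each snippet scored inside the latency budget of the live call.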
So humans are definitely involved in the building and training of these models. Okay, so you've got math, you've got stats, you've got some coding involved, and you've got humans at the last mile to really bring that expertise. And then in terms of operationalizing it, how does that actually get done? What's the tech behind that? Yeah, it's a very good question, Dave. So you build models, and what good are they if they stay inside your laptop? You know, they don't go anywhere. What you need to do is, I use the phrase, weave these models into your business processes and your applications, right? So you need a way to deploy these models. The models should be consumable from your business processes. You know, it could be a REST API call to the model. In some cases, a REST API call is not sufficient; the latency is too high. Maybe you've got to embed that model right into where your application is running. You know, you've got data on a mainframe. A credit card transaction comes in, and the authorization for the credit card is happening in a four-millisecond window on the mainframe in, not old, but, you know, CICS COBOL code. I don't have the time to make a REST API call outside. I've got to have the model execute in context with my CICS COBOL code, in that memory space. So operationalizing is deploying and consuming these models, and then beyond that, how do the models behave over time? Because you can have the best programmer, the best data scientist, build the absolute best model, which has got great accuracy, great performance today. Two weeks from now, performance is going to go down. How do I monitor that? How do I trigger alerts when I fall below a certain threshold? And can I have a system in place that retrains the model with new data as it comes in? So you've got to understand where the data lives. Absolutely. You've got to understand the physics. Yes. The latencies involved. Yes. You've got to understand the economics. Yes.
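The monitoring John describes, watching a deployed model's accuracy over time and triggering an alert or a retrain when it falls below a threshold, could be sketched like this. The threshold, window size, and minimum sample count are assumptions for illustration, not values from the interview.

```python
# Sketch of model monitoring: track recent prediction outcomes against
# ground-truth feedback and flag when accuracy drifts below a threshold.
from collections import deque

class ModelMonitor:
    def __init__(self, threshold=0.80, window=100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, predicted, actual):
        self.outcomes.append(1 if predicted == actual else 0)

    @property
    def accuracy(self):
        # Rolling accuracy over the most recent window of feedback.
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_retrain(self):
        # Only alert once enough recent feedback has accumulated.
        return len(self.outcomes) >= 20 and self.accuracy < self.threshold

monitor = ModelMonitor(threshold=0.80)
for i in range(50):
    # Simulate drift: the model starts missing more often partway through.
    actual = "late_payment" if i < 25 else "contest_payment"
    monitor.record("late_payment", actual)
print(monitor.accuracy, monitor.needs_retrain())
```

When `needs_retrain()` fires, the surrounding system would kick off retraining on the newly accumulated labeled data and redeploy, closing the loop John describes.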
And there are also probably, in many industries, legal implications. Oh, yes, you know, explainability of models. Can I prove that there is no bias here? Right. Now, all of these are challenging, but, you know, doable things, right? What makes a successful engagement? Obviously, you guys are outcome-driven, but talk about how you guys measure success. So, for our team right now, it is not about revenue. It's purely about adoption. Does the client, does the customer, see the value of what IBM brings to the table? This is not just tools and technology, by the way. It's also expertise, right? So there's this notion of expertise as a service, which is coupled with tools and technology to build a successful engagement. The way we measure success is, have we built out the use case in a way that is useful for the business? Does the client see value in going further with that? So this is what we look at right now. Yes, of course, eventually everybody cares about revenue, but that is not our key metric. Now, in order to get there, though, what we have found, a little bit the hard way, is that you need different constituents of the customer to come together. So it's not just me sending a bunch of awesome Python programmers to the client. From the customer's side, we need involvement from their data science team; we talked about collaborating with them. We need involvement from their line of business, because if the line of business doesn't care about the models being produced, you know, what good are they? And third, and people don't usually think about this one, we need IT to be part of the discussion, not just part of the discussion, but a stakeholder. Yes, so IBM has the chops to actually bring these constituents together, and actually a fair amount of experience in herding cats in large organizations. And, you know, the customer's got skin in the game.
This is, to me, a big differentiator for IBM, certainly versus some of the other technology suppliers who don't have that depth of services expertise and domain expertise, but also, on the flip side, a differentiator from many of the SIs, who have that level of global expertise but don't have the tech piece. Now, they would argue, well, we do anybody's tech, but, you know, the tech and the expertise have got to go together. And that, it seems to me, is the big differentiator for IBM. Absolutely, Dave. Well, John, thanks so much for stopping by theCUBE and explaining sort of what you've been up to with the data science elite team, very exciting. Six to nine months in, are you declaring success yet? Still too early? Ah, well, we're declaring success, and we are growing, you know. Growth is good. A lot of attention. All right, great to see you again, John. Absolutely, thank you, Dave. Okay, keep it right there, everybody. You're watching theCUBE. We're here at the Westin in Midtown, and we'll be right back right after this short break. I'm Dave Vellante.