From Orlando, Florida, it's theCUBE, covering ServiceNow Knowledge 17. Brought to you by ServiceNow.

We're back, welcome to Orlando, everybody. This is ServiceNow Knowledge 17, hashtag Know17. I'm Dave Vellante with my co-host, Jeff Frick. Dave Wright is here; he's the Chief Strategy Officer of ServiceNow and a long-time Cube friend. Good to see you again, David.

Good to see you again, guys.

So, fresh off the keynote, we were just talking about intelligent automation and what's new in your world. A new way to work is really the broader theme here. People are changing the way they work, right? So what is intelligent automation, and how does it fit in?

So, when we built intelligent automation, we wanted to come at it from a different angle. We didn't want to build a product and then look for a solution that it worked with. We wanted to go out and speak to people and see what challenges they faced. What we did was come up with four key areas where people wanted to improve or do things differently. We wanted the capability to predict when something was going to happen, from an event perspective. We wanted to use machine learning to augment tickets: to auto-categorize, to assign severity, or, in the case of change, to provide risk analysis, and to do that at a machine level rather than with human triage. Then people were coming back saying, well, we feel we're doing a good job, but we want to know whether we're doing a good job. So that was the concept of expanding the benchmarks program to include more and more benchmarks, so people could see how they compared against their peers. And the final element was that people wanted to set themselves performance targets, but then they wanted to understand: when am I going to get to that target?
So what we then had to do was augment the whole Performance Analytics suite to do predictive analytics. Those are the four core areas that sit in the intelligent automation engine. We can go into as much detail as you want on them, but it's pretty interesting.

So help us understand, because I get a little confused when I hear something like a big announcement coming up, the Jakarta platform, but then I see bits and pieces hit the various products. Maybe set that up for us and help us understand.

Yes. So the benchmarking, the predictive analytics capability, and the ability to do predictive service usage will appear in Jakarta. And then the actual ML side, where we can do the auto-categorization, will appear in the Kingston release. So by the end of the year, everything that was shown will be available.

And it hits the platform, and then the modules take advantage of that, is that correct?

So what's happening at the moment is the initial use cases have gone through around IT. It's IT looking at: how do we process events so we can spot a precursor to a bigger issue and predict that bigger issue? How do we categorize when someone comes in with an IT request or an IT incident? How do we make sure it goes to the right people and gets the right categorization? Then, over time, we'll be able to use that for the security module, for customer service, for human resources. Because, in the same way we've always said, it's all just a different type of service. It's exactly the same process to categorize, to prioritize, to put a severity on something. And then, longer term, we can use this technology to look at all kinds of different files on the system.

And when you say IT first, it's ITSM and ITOM, is that right?

Yes, ITSM and ITOM.

Oh, okay. So, yeah, I like this.
This is a very practical example of AI generally, because people don't really know what it is. You're going to tell us that something's going to break before it breaks; that's really the use case here, right?

What we realized is that because we can now look at time-series data and analyze it, there were a few things we could do. The first is correlation. We could start to link events together so people didn't spend ages just trying to fix the symptoms. They could go right down to the disease and say, well, this is what's causing everything else. The other thing we could build in, because we could understand what normal looked like, is anomaly detection. Normally an event is, hey, this has got high CPU, or this switch has gone down. Now we can say, this just looks weird: we've got activity that never normally happens at this level, or never normally happens at this time of day, or we've never seen this before on a Saturday. And we can generate an anomaly alert at that point. Now, the anomaly alert might be a precursor to a traditional alert. I think the example we used in the keynote was a large number of user threads on a system; that's probably a precursor to high CPU. Once we've started to do that correlation, the more examples you get, the more you can predict. You can say, as soon as I see that precursor, I have a level of confidence about when we're going to see the next event. So now you get a brand-new type of incident: an incident for a predicted failure. The system will say, I've seen this, this, and this; I'm 86% confident we've got two hours before we lose this service. The whole concept of the show was, how do you work at lightspeed? And my whole challenge was, what happens when you do it before it happens? Is that beyond lightspeed?
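The anomaly-detection idea Wright describes, knowing what "normal" looks like for a given time window and alerting when activity deviates from it, can be sketched in a few lines. This is an illustrative toy, not ServiceNow's algorithm; the thread counts and the z-score threshold are made up:

```python
from statistics import mean, stdev

def is_anomaly(history, current, z_threshold=3.0):
    """Flag `current` as anomalous when it falls more than
    `z_threshold` standard deviations from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Illustrative "normal Saturday" user-thread counts on a server
saturday_threads = [120, 115, 130, 125, 118, 122, 127]
print(is_anomaly(saturday_threads, 480))  # True: a level never seen on a Saturday
print(is_anomaly(saturday_threads, 124))  # False: within the normal band
```

A real implementation would keep separate baselines per time slot (time of day, day of week) so that "we've never seen this on a Saturday" is testable, but the core comparison is the same.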
It was very difficult to wrap your mind around it.

The speed of light is too damn slow.

Yeah, it's too slow, and I wasn't going to wait for it. I did get a tweet back where someone said, if you fix everything before it happens, you'll get no budget, because everyone will say nothing ever happens.

If a tree falls and nobody's around. So there's a risk-scoring algorithm in there that helps you say, okay, this one is going to fail, and you'd better act on it?

So if you imagine seeing a precursor to something, you look at how many times that precursor has caused that event, and that lets you give a degree of probability as to how likely you think it is to happen. And it might be you decide to set a threshold and say, look, if it's below 50%, don't bother doing anything, but if it's above 70%, do. Or, if it's a specific type of issue, say something around security, and you're above 90% confidence, I want it flagged as a priority-one issue.

Yeah, but if it's my picnic wiki... So can you inject the notion of value in there, I guess is the question?

Yeah, you can.

I want to ask you about this categorization piece, even though it's coming down the road with Kingston. That's been a challenge for organizations in so many different use cases. The one I think of is email archiving under the Federal Rules of Civil Procedure, all that stuff when electronic records became admissible. Everybody scrambled to categorize, but it was manual, they were using tags, it just didn't work, it didn't scale. So the answer was always technology to auto-categorize at the point of creation or use. But even then it was complicated; the math kind of worked, but you couldn't apply it. What's changed now? And what's the secret sauce behind it? Was that part of the DX Continuum acquisition? Maybe you could unpack it a little bit.
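The threshold policy Wright sketches (ignore below 50%, act above 70%, escalate security above 90%) maps naturally to a small routing function. A hedged sketch, with illustrative action names and thresholds rather than ServiceNow's actual values:

```python
def triage(category, confidence):
    """Map a prediction's confidence to an action. Thresholds and
    category names here are illustrative, not ServiceNow's values."""
    if category == "security" and confidence > 0.90:
        return "flag_priority_one"   # stricter bar, bigger response
    if confidence > 0.70:
        return "act"                 # confident enough to remediate
    if confidence < 0.50:
        return "ignore"              # not worth the effort
    return "human_review"            # middle band: leave it to a person

print(triage("security", 0.93))  # flag_priority_one
print(triage("network", 0.80))   # act
print(triage("network", 0.40))   # ignore
```

The "notion of value" question from the interviewer would slot in as another input here, weighting the thresholds by the business criticality of the affected service.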
So we acquired DX Continuum, and that gave us eight really bright math PhDs, data scientists, who could come in and look at data in a different way. But I think technology also drove it: you've got the compute power to do the number crunching, and you've got the volume of data as well. The more data you have, the more accurate it is. We found that to train auto-categorization, we need between 50,000 and 100,000 records to get to a good degree of accuracy. And then obviously we can keep doing it again and again, and that accuracy gets better and better over time. Even when we ran this out of the box on our own system for the very first time, before we'd rewritten it on the platform, it was 82% accurate straight off.

Now, the really interesting thing about something like categorization is that it's almost as important not to guess when you're going to get it wrong as it is to get things right. We wanted the system to be very sure, to say, I am 100% confident that this is the right category, but if I don't know, I'm not going to guess. I'm not going to say, well, 75% confidence, so I'll set it to this. At that point you want to say, I just don't know. So for the other 18%, for example, it says: in this case, I don't know. And then over time you get to reprocess the things it doesn't know, and that percentage gradually goes up. I think in-house we're now running in the 90% region.

So the math, though, has been around forever. Support vector machines and other techniques. What is it about this day and age that has allowed us to effectively apply that math and solve this problem?

So, the DX Continuum technology used, I think, five different methodologies for interrogating the data. It was neural nets, it was Bayes.
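The "don't guess when unsure" behavior described here, where the categorizer abstains below a confidence cutoff rather than mislabel a ticket, can be sketched as follows. The probabilities stand in for a trained model's output, and the cutoff value is illustrative:

```python
def categorize(class_probs, cutoff=0.95):
    """class_probs: mapping of category -> model-predicted probability.
    Return the top category only when the model is sure; otherwise
    return None ("I just don't know") so a human can triage it."""
    best = max(class_probs, key=class_probs.get)
    return best if class_probs[best] >= cutoff else None

print(categorize({"networking": 0.97, "hardware": 0.03}))  # networking
print(categorize({"networking": 0.75, "hardware": 0.25}))  # None
```

The abstained tickets (the "18%") go to human triage, and those human decisions become fresh training data, which is why the in-house accuracy climbs over time.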
But I think what gives you the big advantage is that people have always taken live data and then tried to do the prediction. That's probably the wrong way to do it. If you take historical data and run it, you just find out which one works. And if this algorithm works best for you, based on the way you structure your data, then that's the algorithm you focus on. That's exactly the way the predictive analytics works. Initially we were saying, okay, we've got these three different models we can use: we can use projection, we can use seasonal-trend decomposition (STL), we can use ARIMA, the autoregressive integrated moving average type of solution. Which one are we going to use? And then we realized we didn't need to guess. We could give the system historical data, ask which of these most accurately maps to it, and then use that algorithm for that data set. Because every data set is different. You might look at one data set that's really spiky, so you don't want to use projection, because if you choose the wrong points your projection algorithm is effectively off. In that case you might want to use STL and smooth out some of the curves. So every time you want to do predictive analytics on a specific data set, you need to work out which mathematical model to use.

So the data is training the models, and the models are your models, correct? And now you tell the customer, I'm sure you do, that this is your data, and your data is not going to be shared with anybody outside of your instance. But the model, the gray area between the model and the data, they start to blend together. Is there concern in your customer base: I don't want the model that you train going to my competitors? Or is this a different world where they feel, hey, I want to learn, like security? What are you seeing there?
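The model-selection approach Wright describes, backtesting each candidate on historical data and keeping whichever predicts the held-out tail best, can be sketched with two toy forecasters standing in for projection, STL, and ARIMA. The data and models here are illustrative only:

```python
def last_value(train):
    """Naive projection: assume the future repeats the last point."""
    return lambda horizon: [train[-1]] * horizon

def moving_average(train, k=3):
    """Smoother: forecast the average of the last k points."""
    avg = sum(train[-k:]) / k
    return lambda horizon: [avg] * horizon

def backtest(series, candidates, holdout=3):
    """Fit each candidate on the older data, score it on the held-out
    tail, and return the name of the best-scoring model."""
    train, test = series[:-holdout], series[-holdout:]
    def error(factory):
        preds = factory(train)(holdout)
        return sum((p - t) ** 2 for p, t in zip(preds, test))
    return min(candidates, key=lambda name: error(candidates[name]))

# A spiky series: naive projection latches onto one extreme point,
# so the smoothing model wins the backtest, as described above.
spiky = [10, 50, 12, 48, 11, 49, 10, 50, 12]
print(backtest(spiky, {"projection": last_value, "smoothed": moving_average}))  # smoothed
```

The key design point is that the algorithm choice is itself data-driven: per data set, the system measures rather than guesses.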
So this is the uniqueness: you don't get a generic ML model where we look at everyone's instance and train across all of it. We only train on your instance. And that's because everyone does things differently. You go to some companies and their highest-priority issue is a 7.9, whereas another customer's would be a 7.1. You've got people doing different implementations like that. But let's say I tried to train on everyone's data, and I went through and said, hey, look at this description, this is a networking issue, so I'm going to categorize it as networking. And you haven't got a networking category; you've got networking infrastructure or networking hardware. Then it fails. So I have to build a model that's very specific to your instance. Every time we do this, we build it for each customer. It's customized artificial intelligence: machine learning models that sit within your instance.

My data, your model, that you're basically applying for me and only me.

Yeah, we do the training on your data and we inject that model, which is your model, back into your instance.

And now the benchmarks. You guys have been talking about benchmarks for a while, and this is taking it to a new level. How do you roll that out? How do you charge for it? What's the strategy?

So people effectively subscribe to it. They're willing to share their data, which is what allows them to see the comparisons. It's almost a community thing: at this point, everyone's sharing data across the systems. Now, we added another nine benchmarks in the Jakarta release, so I think there are 16 benchmarks now. At the moment we're focused around ITSM and ITOM, but as we get more and more customers coming in on CSM, and more on HR and security, we'll be able to introduce the whole concept of benchmarking those as well. And the thing you can do now is you don't just see the benchmark and how you perform; we can also use analytics to show how you're trending.
You might be better than people of a similar size, or people in the same industry, but it might be that you're trending down and you're actually getting close to being worse than them. So the concept is that you can take corrective measures, but it also gives a lot of power to customers. Not just to say, I think I'm doing a good job, but to go to senior management and say, this is how customers that look like us are currently performing. This is how customers in the finance sector perform. This is how customers with a hundred thousand people or more perform. They can see, look, we're leading in this, this, and this area, and where they're not leading they can start to see how they'd address it. Or it might even be that you start to build relationships, where they could say to their account manager, who are the people who've got this best-in-class performance? Could we meet with them? Could we exchange ideas with them?

The evolution of this, on the performance analytics side, when we get to Kingston and beyond, is that we'll be able to do not just the predictive analytics but modeling, to be able to do what-if. And the end goal is, we've got to the point where we've got predictive. You want to get to prescriptive, where the system says, this is where you are; if you do this, this is where you'll get to.

That's what I was going to ask you. Is it intuitive to the client what they should do, and what role does ServiceNow play in advising them? And you're saying that in the future the machine is actually going to advise.

Yeah. Say you want to improve your problem closure rates. You could say, well, when you look at other customers, an indicator of this is that people have much better first-call incident closure.
So what you need to do is focus on closing first-call incidents, because that then has the knock-on effect of driving down the time it takes to resolve problems. We'll be able to get to that, but we'll also be able to let people model different things. They could say, what happens if I increase this by 10%? What happens if I put another 10 people on this particular assignment group? What's the effect going to be? And actually run those what-if models and then decide what you're going to do.

To prioritize the investment.

Absolutely. To get the numbers down.

It's interesting too, because it's a continuous process, as you mentioned. That whole do-the-review-once-a-year-against-your-KPIs thing, that's just not the way it works anymore. You don't have time. And there's the integration of the real-time streaming data, which is interesting, but as you said, it's not necessarily what comes first; it's the historical data that's driving the actual business models and the algorithms.

And I think the thing about the whole benchmark concept is that it's constantly being updated. It's not like you take a snapshot and say, okay, we can improve and move here. You see whether everyone else is improving at the same time. There might just be a generic industry trend where everyone's moving in a certain direction. And as we start to see more things coming online from an IoT perspective, I'll be interested to see whether people's CMDBs start to expand, because I don't know if people have yet established whether IT is going to be responsible for IoT, given it's using the same protocols for its messaging. How are you going to process those events? How are you going to deal with all that?

So that gets me to man versus machine. Machines have always replaced humans, but for the first time it's really happening quickly with cognitive functions.

Right.
And one of your speakers at the CIO event, Andrew McAfee, and his colleague Erik Brynjolfsson have written a book, and in that book they talk about the middle class getting hollowed out, and they theorize that a big part of that is machines replacing people. One of the stats is that the median income for US workers has dropped from $55,000 to $50,000 over the last decade. They posit that cognitive functions are replacing humans, and you see it everywhere: billboards, the kiosks at airports, et cetera. Should we be alarmed by that? What's your personal opinion here? I know it's a scary topic for a lot of IT vendors, but it's reality, and you're a realist and a futurist. What are your thoughts? Share them with us.

So people have different views on this. If you look at the view of executives, they see this as potentially creating more jobs. If you look at the workforce, I completely agree with you: there's a massive fear that, yeah, this is going to take my job away. I think what happens over time is that jobs will shift. People will start doing different things. You can go back 150 years and find that 90% of Americans worked on farms, and now it's about 2%.

Not too many software engineers back then, either.

Well, not too many. Hard to get that mainframe into the field. What I think you can do is use AI and machine learning not just to replace the mundane or very repetitive jobs; you can actually start to reverse that process. One of the things we see is that initially, when people talked about concepts like chatbots, it was all about how you externalize it, how you have people coming in and interfacing with a machine. But you can flip that and have a bot become a virtual assistant. Then you're enabling the person who's dealing with the issues to be better than they were.
An interesting example is the way people analyze sales prospects. In the past, people had a lot of different opportunities they were working on, and the good salespeople would be able to isolate what's going to happen and what's not. What I can do is run a machine learning algorithm across that pipeline and predict which deals are most likely to come in, and then have the salesperson focus on those. I've actually improved the skills of that salesperson by using ML and AI. I think a lot of the time you'll be able to move people off a job that was repetitive and dull, augment their skills, and perhaps allow them to do a job they couldn't have done before. So I'm pretty confident, just based on the impact this is going to have from a productivity perspective, about where this is going from a job perspective. There's a really good McKinsey report on the impact of the steam engine: it drove a productivity increase of 0.3% year on year over 50 years. The prediction for artificial intelligence is that it'll produce a productivity increase of 1.4% a year for the next 50 years. So you're looking at something people are predicting could be five times as impactful as the Industrial Revolution.

That's pretty significant. The Second Machine Age, this is a huge topic. We're out of time, but I would love for you, Dave, to come back to our Silicon Valley studio and talk about this in more depth, because it's a really important discussion.

I'm always around, happy to do it.

Thanks very much for coming on theCUBE. Great to see you again.

Thanks, guys.

All right, everybody, we'll be back with our next guest right after this short break.