From Orlando, Florida, it's theCUBE, covering ServiceNow Knowledge 17. Brought to you by ServiceNow. We're back in Orlando, everybody. Welcome, this is theCUBE, the leader in live tech coverage, and we're here at Knowledge 17. This is our fifth year covering Knowledge. I'm Dave Vellante with Jeff Frick. Allan Leinwand is here. He's the CTO of ServiceNow, and he's joined by Carl Vanderpoel, who is, let's see, VP of products, GM of analytics and IT business management, and sort of a VP of everything else at ServiceNow. Welcome, gentlemen, to theCUBE. Thanks so much for coming on. So, Carl, you guys were up today at CreatorCon, had the big keynote, talking about, really, what we've been talking about all week: the practical application of machine learning. So set it up and share with our audience sort of a bumper sticker of the keynote. So, yeah, I mean, machine learning, what we talked about at the keynote, it's like, we're a very pragmatic company. Actually, we pride ourselves on being very pragmatic. And we have to be, because we serve the biggest enterprises in the world and their workflow relies on that. So when we talk about machine learning, there's a lot of hype out there. We are focusing on the things that are actually there today to increase productivity. And that's exactly what we've been doing. Again, when we talk about IoT and big data and natural language, there's tons of stuff out there, and some of it is real and some of it is not real. And, Allan, the emphasis this morning in the discussion was really on simplifying machine learning, embedding it into the platform. Talk about that a little bit. Yeah, I mean, that was sort of the goal of putting it in the platform with the DxContinuum acquisition we did in January: taking the technology, embedding it in the platform, so it's just like all the other pieces of functionality that developers love to use on the platform. And that is machine learning.
It's something we call supervised machine learning, so it's a specific class. Like Carl says, it's not self-driving cars and flying ships and Skynet or things like that. It's really all about taking specific data, building a predictive model about it, and then people access that model in their applications, really making their apps just smarter at the end of the day. Well, when you talk about DxContinuum, I remember when Jeff and I were first exposed to ServiceNow, walking around the ecosystem. We said, wow, there's a lot of companies here that ServiceNow could acquire. And we asked Frank about it. We asked Fred about it. And both of them were consistent. Frank said, we're not going to buy anything that doesn't fit into the platform. Fred said the same thing. And you guys are pretty dogmatic about that. So when you think about an acquisition like DxContinuum, what's the process that you go through to vet them, to make sure that you can replatform? How does that all work? Yeah, we spend a lot of time internally thinking about that, because you're right. What we don't want to do is build this Frankenstein sort of platform thing where you have- Frankennow. Yeah, Frankenplatform, or no, Frankennow. You don't want different styles of things you kind of cobble together, because when you do that, this one is on rev X and then this one's on rev Y, and this one revs to X.1 and it isn't compatible with Y.2, and you end up with this mess of things trying to all connect. So instead we have a consistent data model. We have a consistent way of doing things. And when we are vetting companies, we look to see if we can take what they built and put it into our model. If it is completely orthogonal to how we built things, you know, we'll probably call time out on it and say we're not so sure that we really want to replatform this and move down that path. But we do spend a lot of time thinking about it. Carl's company, Mirror42, was actually our first acquisition.
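As an editorial aside, the supervised-learning loop Allan describes, taking specific labeled data, building a predictive model, and then letting applications query that model, can be sketched in a few lines. This is a purely illustrative toy, not ServiceNow's actual implementation; the incident texts and assignment-group labels are made up.

```python
from collections import Counter, defaultdict

# Hypothetical labeled incident history: (short description, assignment group).
# In supervised learning, these labels are the "supervision" the model learns from.
incidents = [
    ("email server not responding", "messaging"),
    ("cannot print to floor 3 printer", "desktop-support"),
    ("vpn connection drops repeatedly", "network"),
    ("printer out of toner", "desktop-support"),
    ("mailbox quota exceeded", "messaging"),
    ("wifi unavailable in building B", "network"),
]

def train(records):
    """Build a toy model: word frequencies per label."""
    model = defaultdict(Counter)
    for text, label in records:
        model[label].update(text.split())
    return model

def predict(model, text):
    """Score each label by vocabulary overlap with the new record."""
    words = text.split()
    return max(model, key=lambda lab: sum(model[lab][w] for w in words))

model = train(incidents)
# An application then "asks" the trained model to categorize a new record.
print(predict(model, "the printer on floor 2 is jammed"))  # -> desktop-support
```

The point of the sketch is the shape of the workflow: historical records in, a model out, and applications calling the model at the end, which is what "making their apps just smarter" amounts to.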
And I remember being involved in that and getting him over the line, and then getting him replatformed and built into our data model. You get great performance improvements, better integration with our platform, and that's really the benefit we want to have for our customers. Yeah, I mean, I think, Jeff, yours was one of the first companies we saw, and I think we talked to you on the show floor. I remember that. 2013. Wow, it was amazing. He had orange shoes, by the way. He had wooden orange shoes. Yes, that's right. That's right. And caramel cookies. So when it came to DxContinuum, from a product perspective, what was your angle there, and what was the discussion like internally? It's exactly what Allan says. You know, we walk out on a lot of opportunities because we don't believe that we can replatform them, or it's just too wide of a gap. So it's like, you know, is the solution there, is the technology there? Is there a direct fit with what we are doing? And we look obviously at the team, and then the replatforming is a big part of the due diligence. And, you know, if we tick all those boxes, we'll move. And one of the most important things after the replatforming is we see such rapid adoption of those acquisitions. So yes, you know, we'll delay the time to market. We will replatform first, which typically takes six to nine months. With DxContinuum it was actually really fast. And with analytics it took a little bit longer, the first time, but yeah, six to nine months on average. But then after that, when you launch it, and especially with, you know, I think the parallel between analytics and machine learning, they're both engines. They're both engines that make every single app that we build on this platform better. Whether it's IT service management or HR or customer service, they all benefit from embedded analytics and embedded machine learning. So the adoption that you see one or two years later after that acquisition is so well worth it.
So it's fine to wait six to nine months. We're not in a hurry with these things. We want to do it right. We want to make sure it's enterprise scale, that it sits in the nonstop cloud, and that, you know, we're going to bake it into everything that we do. So a little bit more due diligence up front, but then the payoff is just, you know, 10 times bigger. But I would imagine too, part of the replatforming, from the entrepreneur's perspective coming in, is they kind of get a second chance, right? They have the experience with what they've built. They've got the team with which they've built it. Now they're kind of rebuilding it a little bit onto the new platform, so they can, you know, maybe fix a couple of things, some mistakes from the early days. Never met an engineering team that didn't want to re-engineer something and refactor something. So we're giving them a chance to refactor it. And by the way, to be closer to the cloud, and actually closer to the bare metal, and get the performance impact of doing that. You know, a lot of the folks that we partner with have generally integrated with us, so they talk to us over an API. Take that API away, take that loose coupling away, and put them right into the platform. And then they're hardwired in. They're hardwired in and, you know, they're in the matrix at that point, and they're just wired in the way it goes. I want to ask you guys about this notion of machine learning on your instance only. At Wikibon, we've been sort of advising our clients, it's kind of a caveat emptor: beware the model. Let me set it up. So everybody says, every cloud company, it's your data. Okay, but your data is feeding a model and it's training a model. And so what's happening is the dividing line between the model and the data is becoming a very gray area. And then companies are taking that model and they're using it for other companies. So our concern is, how do you protect your IP?
Yes, it's your data, but if your data is feeding the model and your model is being used over here, well, you're giving your IP to your competitor without even knowing about it. How do you guys address that? Who wants to take that one? Well, I can start. No, that's exactly right. So, you know, we don't do that. You have your data in your own database in your own instance, and we do not co-mingle data. We do not, we will not, and we only do that if you ask us to, for benchmarking, for instance, and then we anonymize it. But for machine learning and building that training set, it really is training on the data of that customer and that customer only. So we're not going to aggregate, and there's actually another big benefit of that. The prediction model that you generate on your own data then serves your business processes and your workflows. Obviously, if you only train on your own data, you will get a prediction model that is optimized for your organization, your processes and your workflows. So the prediction model will be more accurate. So there's benefits. I mean, there are use cases where you want to aggregate over everything. That's just not us as a company. If you're Facebook or Twitter, you probably want to apply machine learning across all the tweets and all the posts. But for ServiceNow, we really look at, you know, the data for a single customer. Yeah, I mean, to Carl's point, I think what we try to do is, we're not a consumer player. We're an enterprise player. Enterprises want to own their data. It's their data. It's their IP, to your point, Dave. You know, we're not building a data lake and then trying to, like, pick the droplets out that are relevant to you, right? We're building a set of information that's driven off your data, that helps you drive decisions about your enterprise.
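The per-instance training boundary Carl describes, where each customer's model sees only that customer's rows, can be sketched as one model object per tenant with no shared training set. All class names, fields, and numbers below are illustrative assumptions, not ServiceNow internals.

```python
class InstanceModel:
    """A toy per-customer 'model': average resolution hours by priority."""

    def __init__(self):
        self.totals = {}  # priority -> (sum_of_hours, count)

    def train(self, rows):
        for priority, hours in rows:
            s, n = self.totals.get(priority, (0.0, 0))
            self.totals[priority] = (s + hours, n + 1)

    def predict(self, priority):
        s, n = self.totals[priority]
        return s / n

# One model per customer instance: training never crosses the tenant boundary.
models = {"acme": InstanceModel(), "globex": InstanceModel()}
models["acme"].train([("P1", 2.0), ("P1", 4.0)])
models["globex"].train([("P1", 20.0)])

print(models["acme"].predict("P1"))    # 3.0 -- reflects only acme's history
print(models["globex"].predict("P1"))  # 20.0 -- unaffected by acme's data
```

The two predictions differ precisely because nothing was aggregated, which is the accuracy benefit Carl points to: each model fits its own organization's workflows rather than a blended average.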
So the canonical thing we talk about is, printer down for our CEO is probably a higher priority event than printer down for somebody else, right? So we need to think about how that data actually gets trained within the enterprise. Now, there might be other companies where printer down is critical for their business, because maybe they're a printing company. So trying to disseminate that information when you aggregate and co-mingle that data has IP implications. It also has relevancy implications. So if you're writing applications and you want to query an API or query a system, you want to know that that's going to be relevant to you. And if you're not using only your data, it's like using everything in the library to find a word, for example: not that relevant at times. Right, and you mentioned Twitter and Facebook. They're pretty safe bets because they're consumer, but Amazon, Google, you know, we've been pressing those guys, and we want to hear more from them, IBM as well. And they've been pretty forceful. I think they're giving strong answers, but we want to see more evidence. I think you guys, with your architecture and the way you're applying machine learning, the model actually is self-protecting, right? The multi-instance architecture has the benefit of not co-mingling data. So we don't want to lead with our chin and say, well, for machine learning we're now going to co-mingle data. Doesn't seem like a really smart way to enter the ring, right? Let's talk a little bit about the cloud. We haven't talked much about your cloud this week. Which is interesting. Three days, no talk of cloud. I mean, very little. So, you know, and when we first- It's presumed to just work now, you're doing such a good job. Well, when we first met ServiceNow, you know, we were observing all these hyperscale companies. We said, well, ServiceNow is kind of not really hyperscale like an Amazon or a Google, but pretty big.
And now you're seeing a lot of SaaS companies say, all right, we're going to run on Amazon. You see more than a handful. You guys are, again, pretty dogmatic about your cloud, your application availability. So talk about, you know, maybe you could address that hyperscale thing. Are you, aren't you? I mean, does it matter? But more importantly, you know, why so dogmatic about your cloud? Isn't it more expensive? Why is it better for your business and your customers? Sure, sure, happy to talk about that. So, first of all, at scale, I think what really matters is to make sure you have data centers for our customers in the right geographies. You know, we can have a contest about server counts and MIPS and bandwidth, but I don't really think that matters to enterprises. What really matters to enterprises is, do I get the compute, storage and networking I need to drive the application, to drive the workflow? So we're building things out. We have, you know, 16 global data center regions. We continue to extend them. We continue to build our footprint. We continue to make sure that we have the resources to drive our customers' applications. Now, in terms of, you know, being dogmatic about doing our own, yeah, we are. And the reason we are is because we think that when you leverage somebody else's infrastructure, one, you can't optimize it exactly the way you want. You can't get 10-gig pipes directly to your servers. You can't build out infrastructure to the networks you want. You can't get the storage ecosystem that you need to build in and do direct storage the way you want to do it. There are lots of little ways you sort of twist a knob on optimizing the cloud that really end up building a better product for the customer when you do it yourself. Generally, that leads to your point, Dave. People say, well, isn't it more expensive? And, you know, at the end of the day, we spend a lot of time optimizing how we use the hardware.
Now, if we really just took a separate piece of silicon and aluminum and racked and stacked it for every individual customer, yeah, that'd be a lot more expensive. But we spend a lot of time doing capacity modeling, taking the servers and storage we have deployed and really fine-tuning it for our customers' needs. And we've found that it's incredibly cost-effective for us. So when CJ stands up and says there's a 30-plus percent performance improvement in Jakarta, how much of that contribution pie is from code optimization versus the cloud, or the combination of the two? It's got to be the combination of the two. I mean, there are all these things we're doing. We're revving the hardware stack. We're always buying new server and storage hardware for our customers, making sure we're on the leading edge of that. There are clearly some coding things that we've done as well, looking at new architectures on the UI and new architectures on the back end. You heard what we're doing with MetricBase, in order to store large amounts of data in very small data footprints. So it's a combination of all that that comes together that really is incredibly effective. So the inference is that if you were running on a public cloud, somebody else's public cloud, you wouldn't have been able to get maybe that much performance. You'd get some performance, but maybe not as much. Is that a fair assertion? Yeah, I mean, the way I say it when I talk to our teams about it and our customers about it is, we want to be able to draw a thread. We want to draw that thread from the application all the way down through the storage, into the network, all the way down to the racks and into the fiber. We want to understand everything and optimize that, both for performance as well as cost and availability. And you can't really do that if you're working on a public cloud, where they won't even tell you where your server's at.
It really sounds kind of like Facebook, Facebook's description of their stack, because it's one big application. And so they can optimize for IO, they can optimize for CPU, they can optimize for storage, because it's really just one big giant application, and you can really tweak that hardware for your specific demands. It's exactly service management, right? It's end-to-end service. We own it from beginning to end. We do not outsource parts of the service and say we're hosted somewhere else. We are responsible for the end-to-end service for our customers, from code to performance to everything else in between. So, in a way, we're practicing what we're preaching. So, from the product guys' perspective, you don't get up one day and say, ah, I'm just going to do it. I'm going to spin up some EC2 today. Yeah, not a good idea. You can't get what I need. I want to ask you, Carl. I mean, other than the fact that you'd get your butt kicked for trying to do that. But no, seriously, are there times where you'd say, ah, I just really can't get what I need out of my own cloud? Honestly, I can't think of an example. I mean, the challenge that we also have is, imagine you would do that, right? You'd say, like, I want to test something, and we would allow developers to spin up an EC2 instance. And then they want to test it on real customer data. Data doesn't leave our cloud. So you couldn't do the same amount of testing. You couldn't do upgrade testing. You couldn't do performance tuning. You couldn't work with real customer examples. Because data doesn't leave our cloud, when it comes to productizing, we truly only develop on our own cloud. And it's not a source of friction? No. Nope. I was just going to follow up on your point, Jeff. I mean, if you look at all the biggest companies out there, right? Google, Facebook, Salesforce, ourselves. We're doing it on our own, right?
Right, right. Well, and you know, I mean, to the cost question, if all you were doing was infrastructure as a service, I would say the future is bleak. Fair enough. Because the marginal cost economics of Amazon is just... but to compete with those types of cost structures, you have to have value up the stack. You have that value. And I would say the same thing, by the way, for IBM and Oracle and virtually any SaaS player who has the stomach to do what you guys are doing. It's not easy work. It's not easy work. And the other advantage we have, bluntly, is we have such an amazing product and amazing ecosystem of customers that people want to come work on the cloud, because you need the talent to do it. I mean, there are clearly companies that don't have the talent to do it, and then they do just outsource. And that's perfectly legit. But if you do have the stomach to make the investment in the talent and the team to do it, you just can end up with a superior service. Okay, I want to switch subjects. Software asset management, the biggest hoot of the week. Hoot as in, whoo, whoo, whoo. Because there's always a big clap, and it's not a golf clap, you know. There's always something that's super exciting that the crowd genuinely goes crazy for. That was software asset management. Let me set this one up. So you guys have been very politically correct talking about some vendors and audits. I've written a lot about Oracle negotiations in my life. And for an Oracle customer, I think CJ gave the stat that 25% of a budget is licenses. For many Oracle customers, and we've done hundreds of assessments of Oracle customers, sometimes it's as high as two-thirds to 75% of the budget that is software license and maintenance costs. And Oracle in particular, but others too, use audits as a weapon, a negotiating weapon, and they mop up the client base. Look at Salesforce. Salesforce did a billion-dollar deal with Oracle, and I guarantee there was an audit behind that.
So, software asset management is a huge opportunity for your customers and yourselves. Talk about what you announced and why it's important. Yeah, and let me start by saying, well, you know, we'd like to take credit for coming up with that idea, but we did not. It was our customers who told us over and over and over and over again: we want you to get into the software asset management business. You own the CMDB, we have all the contracts in your system, we're already doing asset management, we already store the software assets in there. You've got to help us normalize, reconcile and optimize the software licenses. And so, I think about 18 months ago, we took the decision. We looked at make or buy, and the replatforming was an issue, so we decided to go on our own and we started building this. And yeah, with great success, and I think this is probably the best launch that I've seen. It's definitely better than the analytics launch, which I was part of. That was the first time we did this; now we're completely ready, we kind of know what to expect. Even the big theater and the labs and all the sessions were a completely full house, people standing in the back. So a lot of interest, but we've already been working with some of these customers, so we know how good it is. We already had a panel, with design partners and customers telling how accurate and how helpful this was. So it's exactly that. You know, we're focused on Microsoft and Oracle in this first release for protecting against audits. We normalize everything on the client side and on the server side. We have optimization packs for Microsoft and Oracle, and then you'll see us, you know, coming out with all the other ones, the IBMs and the VMwares or Citrix, you name it. So that's going to follow on. Very exciting times. You also announced cloud management. I'll give you another idea that we hear from a lot of customers, where you could maybe apply AI and machine learning.
I wonder if you've encountered this, but AWS in particular, because it's such a popular, you know, infrastructure as a service provider, probably has, I don't know, 10 to 15 different data interfaces. So whether it's Kinesis, EC2, S3, DynamoDB, you know, Aurora, Redshift, on and on and on. Each is its own primitive with its own API. Customers are very confused as to which one to use. And they would love help in figuring out, okay, what's the best horse for the course, and how can I optimize, you know, my spend? As opposed to looking at it at the end of the month and saying, ouch. So it strikes me, when I was listening to some of the things that you guys were doing, you know, this week, that that's another one of the many problems that you could solve. Is that something, if I come to you as a ServiceNow customer with that problem, can you help me? Allan? We do. With the cloud management platform, we don't necessarily help you build your app on Amazon, but we can build these things called blueprints. And the blueprints do describe the various primitives and how those primitives go together. And then once those applications and those puzzle pieces are put together into a full-on blueprint and launched into the cloud service, we do then gather all the costing information off that and provide you a look at your spend across various different types of clouds. So we can do that. Now, we're not going to say use RDS over here versus Redshift versus Glacier versus, you know, S3, that sort of thing. But you're going to make it easier for me to make this decision. We're going to make it easier for you to make the decision, and we're going to show you the cost around that, and then make it easier for you to replicate that decision, so you don't have different teams applying different blueprints to try and build some of the same things. I mean, there are probably a thousand examples like what I just gave you in your customer base. Power of the platform, Jeff, right? Right, absolutely.
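The blueprint idea Allan describes, a named bundle of cloud primitives whose costs can then be rolled up per provider, can be sketched as plain data plus one aggregation function. The resource names and prices below are invented for illustration; this is not ServiceNow's cloud management data model.

```python
# A hypothetical blueprint: the primitives one application is built from,
# possibly spanning more than one cloud provider.
blueprints = {
    "web-app": [
        {"provider": "aws",   "primitive": "ec2", "monthly_cost": 310.0},
        {"provider": "aws",   "primitive": "s3",  "monthly_cost": 42.5},
        {"provider": "azure", "primitive": "vm",  "monthly_cost": 280.0},
    ],
}

def spend_by_provider(blueprint):
    """Roll up the monthly cost of a deployed blueprint per cloud."""
    totals = {}
    for res in blueprint:
        totals[res["provider"]] = totals.get(res["provider"], 0.0) + res["monthly_cost"]
    return totals

print(spend_by_provider(blueprints["web-app"]))
# {'aws': 352.5, 'azure': 280.0}
```

That roll-up is the "look at your spend across clouds" part; reusing the same blueprint across teams is what prevents "different teams applying different blueprints to build the same things."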
We help people manage work, right? It's a pretty simple concept. And we're trying to let them build a work item called a blueprint and be able to use it and leverage it across multiple disciplines. And actually the two are a little bit related, the cloud management and the software asset management, if you add in another product that we added in Istanbul, application portfolio management. Think about it this way: one of the biggest priorities of CIOs is, you know, I've got a thousand apps I need to rationalize. I need to pick a couple of platforms and I want to rationalize. That's really what APM is. And then when you rationalize, you're going to go either to the cloud, where cloud management comes in, and when you do, you need to, you know, reclaim some of the licenses of the old stuff that you're getting rid of, or you need to optimize your licenses for the ones that you keep investing in. So, you know, although they're three separate products, they're really part of that one single conversation; you're tackling that one issue. It's like, okay, rationalizing the apps, are we going to the cloud? Are we going to build them on platforms? What are we going to do? How do we optimize the cost and be compliant? So sometimes they look like different products, but it's really not, it's one conversation. Right, right. All right, gents, we have to leave it there. Thanks so much for coming on theCUBE, and coming on together. You know, we saw you in the keynotes this morning. We said it'd be great, and you guys are always excellent guests. Thanks so much. Thanks very much. Thanks, guys. All right, keep it right there, everybody. We'll be back right after this short break.