From Washington, DC, it's theCUBE, covering ScienceLogic Symposium 2019, brought to you by ScienceLogic.

Hi, I'm Stu Miniman, and this is theCUBE's coverage of ScienceLogic Symposium 2019, here at the Ritz-Carlton in Washington, DC. About 460 people here; the event's grown about 50%. I've been digging in with a lot of the practitioners, the technical people, as well as some of the partners. And for this session, I'm happy to welcome to the program a first-time guest, Syrenda, who is the Vice President and CTO for Automation in IBM's Global Technology Services. And joining us also is Dave Link, who's the Co-Founder and CEO of ScienceLogic. Gentlemen, thank you so much for joining us.

Thank you for having us.

All right, so Syrenda, let's start with you. Anybody that knows IBM knows services are at the core of your business, a primary driver, and a large number of IBM's employees are there. You've got automation in your title, so flesh out for us a little bit the part of the organization you're in and your role there.

All right, so as you pointed out, a big part of IBM is services. It's a large component. And there are two major parts to it, though we come together as one in terms of IBM Services: one is much more focused on infrastructure services and the other on business services. The automation I'm dealing with is primarily in the infrastructure services area, which means everything from the resources in a client's data center, going now much more, of course, into a hybrid environment, hybrid multicloud, with different clouds out there, including our own, and providing the automation around that. And when we say automation, we mean the things we have to do to keep our clients' environments healthy from an availability and performance standpoint; making sure that we respond to the changes they need to the environment, because it obviously evolves over time, and that we do so effectively and correctly.
And certainly another very important part is making sure they're secure and compliant. So if you think of a Maslow's hierarchy of the things IT operations has to do, that in a nutshell sums it up. That's what we do for our clients.

So Dave, luckily we've got a one-on-one with you today to dig out lots of nuggets from the keynote and talk a bit about the company. But you talk about IT operations, and one of the pieces: I've got infrastructure, I've got applications, and ScienceLogic sits at an interesting place in this kind of heterogeneous, ever-changing world we live in today.

It does, and the world's changing quickly because the cloud's transforming the way people build applications. That is causing a lot of applications to be refactored to take advantage of some of these technologies, especially applications built for global scale; we've seen them, we've used them, the applications we use on our phones. They require a different footprint, and that requires a different set of tools to manage an application that lives in the cloud, and that might live in a multi-cloud environment, with some data coming from private clouds that populate information on public clouds. What we've found is that the tools industry is at a bit of a crossroads, because applications now need to be infrastructure aware, but the infrastructure could be served from a lot of different places, meaning you've got lots of data sources to sort together and contextualize to understand how they relate to one another in real time. And that's the challenge we've been focused on solving for our customers.

All right, Syrenda, I'm wondering if we can get a little bit more into automation. And when we talk automation, there's also a term IBM has used for a number of years: cognitive.
And there was the analyst who spoke in the keynote this morning; he put cognitive as this overarching umbrella, and underneath that you had AI, and underneath that you had the machine learning and deep learning pieces. Can you help tease that out a little bit for IBM Global Services and your customers? How do they think of the relationship between that ML, AI, cognitive piece and automation?

So I think the way you laid it out, the way it was talked about this morning, absolutely makes sense. Cognitive is a broad definition, and within that, of course, is AI, and there are different techniques within AI: machine learning being one, and natural language processing and natural language understanding, which are not as statistically driven, being another type. And we use all of these techniques to make our automation smarter. Oftentimes when you're trying to automate something, there can be a very prescriptive type of automation: say, a particular event comes in and then you take a response to it. But oftentimes you have situations where you have events, especially what Dave was talking about, when an application is distributed, not just a classical distributed application, but now distributed over whatever infrastructure you may have. Some of it may be running on the mainframe; some of it is actually running in different clouds. All of this comes together, you have events and signals coming from all of it, and you're trying to reason over where a problem may be originating, because now you have slow performance. What's the reason for the slow performance? Trying to do some degree of root cause determination, problem determination, that's where some of the smarts comes in, in terms of how we want to diagnose a problem, and then kick off maybe more diagnostics, and eventually kick off actions to automatically fix it, or give the practitioner the ability to fix it in an effective fashion. So that's one place.
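The root-cause narrowing described here, reasoning over events from many dependent components to find where a slowdown originates, can be sketched very simply. This is an illustrative toy, not IBM's or ScienceLogic's actual algorithm: given which services are alerting and what each service depends on, keep only the alerting services whose own dependencies are healthy, since those are the likely origins.

```python
# Toy sketch of dependency-based root cause narrowing.
# Service names and topology are hypothetical, for illustration only.

def root_cause_candidates(alerting, depends_on):
    """alerting: set of service names currently raising events.
    depends_on: dict mapping a service to the set of services it calls."""
    candidates = set()
    for svc in alerting:
        upstream = depends_on.get(svc, set())
        # If none of this service's dependencies are also alerting,
        # the problem likely originates here rather than downstream of it.
        if not (upstream & alerting):
            candidates.add(svc)
    return candidates

# A web tier that is slow because its database is slow: the database,
# not the web tier, comes back as the candidate.
deps = {"web": {"api"}, "api": {"db", "cache"}, "db": set(), "cache": set()}
print(root_cause_candidates({"web", "api", "db"}, deps))  # {'db'}
```

Real platforms layer time correlation, topology discovery, and learned behavior on top of this basic idea, but the core move is the same: suppress symptoms that are explained by an upstream failure.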
And the other area: that's one type, I shouldn't say one type, but generally machine learning techniques lend themselves to that. There's another arena: of course, there's a lot of knowledge and information buried in tickets and knowledge documents and things like that, and natural language understanding comes in to extract from those the things that are most meaningful. Then you marry that with the information coming from machines, which is far more contextualized, and you can reason over the two together and make decisions. So that's where the automation...

I wonder if we can unpack some of those terms; I want to up-level a little bit. I hear knowledge, I hear information. At the core of everything people are doing today is data. And what was really illuminating to me, from what I've seen of ScienceLogic, is that data collection, and leveraging and unlocking the value of data, is such an important piece of what they're doing. From an IBM standpoint, and your customers', where does data fit into that whole discussion? How do things like ScienceLogic fit in the overall portfolio of solutions you're helping customers with, whether managed or through deployment and services?

So definitely in the ITOps arena, a big part of ITOps, at the heart of it, really is monitoring and keeping track of systems. All sorts of systems throw off a lot of data, whether it's log data, real-time performance data, events that are happening, monitoring of the performance of the application. That's tons and tons of data. And that's where a platform like ScienceLogic comes in as a monitoring system, with capabilities to do what we also call event management. In the old days, you probably would have thought about monitoring, event management, and logs as somewhat different things. These worlds are collapsing together a bit more.
And so this is where ScienceLogic has a platform that lends itself to a marriage of these spaces, which would then feed a downstream automation system, informing it what actions to take.

Yeah. Dave, you want to comment on that? I've got some follow-ups too.

Yeah, there are many areas of automation, layers of automation. And I think Syrenda has worked with customers over a storied career to help them through the different layer cakes of automation. You have automation related to provisioning systems, in some cases provisioning based on capacity analytics. There's automation based on analysis of a root cause. And then, once you know the root cause, conducting other layers of automation to augment it with other insights, so that when you send up a case or a ticket, it's not just the event, but the other information somebody would otherwise have to go gather after they get the event to figure out what's going on. You do that at time of event; that's another automation layer. And then the final automation layer is, if you know predictably how to solve the problem, just going ahead: if you have 99% confidence that you can solve it based on these use case conditions, just solve it. So when you look at the different layers of automation, ScienceLogic is in some cases a data engine to get accurate, clean data to make the right decisions. In other cases we'll kick off automations in other tools. In some cases we'll automate into ecosystem platforms, whether it's a ticketing system, a service desk system, or notification systems that augment our platform. All those layers really have to work together in real time to create the service assurance IBM's customers expect. They expect perfection. The excellence, the brand that IBM presents, means it just works. And so you've got to have the right tooling in place and the right automation layers to deliver that kind of service quality.
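The layered decision Dave describes, auto-remediate only at very high confidence, otherwise enrich the event and open a ticket, can be sketched as follows. The threshold, field names, and functions here are hypothetical, not ScienceLogic's or IBM's API; it's just the shape of the control flow.

```python
# Hedged sketch of confidence-gated remediation. Everything below
# (threshold value, dict keys, runbook names) is illustrative.

AUTO_REMEDIATE_THRESHOLD = 0.99

def handle_event(event, known_fixes):
    """Decide between automatic remediation and an enriched ticket."""
    fix = known_fixes.get(event["signature"])
    if fix and fix["confidence"] >= AUTO_REMEDIATE_THRESHOLD:
        # High-confidence known fix: act without human involvement.
        return {"action": "auto_remediate", "runbook": fix["runbook"]}
    # Time-of-event enrichment: gather the context a human would
    # otherwise have to collect manually after receiving the ticket.
    enrichment = {"recent_changes": [], "related_events": [], "topology": {}}
    return {"action": "open_ticket", "event": event, "context": enrichment}

fixes = {"disk_full": {"confidence": 0.995, "runbook": "expand_volume"}}
print(handle_event({"signature": "disk_full"}, fixes)["action"])    # auto_remediate
print(handle_event({"signature": "unknown_err"}, fixes)["action"])  # open_ticket
```

The design point is that the ticket path is not a failure mode: the enrichment attached at time of event is itself an automation layer, even when a human makes the final call.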
Yeah, Dave, one of the things that's really impressed me is the balance: on the one hand, we've talked to customers that take many, many tools and replace them with ScienceLogic, but we understand there is no one single pane of glass, or one tool to rule them all. The theme of the show is you get the superheroes together, because it takes a team. You gave a little bit of a history lesson that resonated with me; I remember when SNMP was going to solve everything for us. But there's a lot of focus on all the integrations that work, so if you've got your APM tools, your ITSM tools, or things you're doing in the cloud, it's the API economy today. So balancing that: you want to provide the solutions for your customers, but you're going to work with many of the things they already have. It's been an interesting balance to watch.

Yeah, I think that's the one thing we've realized over the years: you can't rip and replace years and years of work that's been done for a good reason. I did hear today that one of our new customers is replacing a record 51 tools with our product. But a lot of these might be shadow IT tools they've built on top of special instrumentation for specific use cases or applications, or a reason a subject matter expert would apply another tool, another automation. The thing we've realized is that you've got to pull data from so many sources today, because machine learning and artificial intelligence are only as good as the data they're making decisions upon. So you've got to pull data from many different sources, understand how they relate to one another, and then make the right recommendations, so that you get the smooth service assurance everybody's shooting for.
And in a time when systems are ephemeral, when they're coming and going and moving around a lot, that compounds the challenge operations has: not just all the different technologies that make up the service, and where those technologies are being delivered from, but the data sources that need to be mashed together in a common format to make intelligent decisions. And that's really the problem we've been tackling.

All right, Syrenda, I wonder if you can bring us inside your customers. You talk to lots of enterprise customers, so help share their voices. In this space, they're probably not calling it AIOps, but what are some of the big challenges they're facing, where are you helping them meet those challenges, and where does ScienceLogic fit in?

So certainly, yes, they probably won't talk about it that way. They want to make sure their applications are always up and performing the way they expect them to, and at the same time they need to be responsive to changes, because their business demands mean the applications they have out there continually have to evolve, yet at the same time be very available. Even if you think about something as traditional as batch jobs: we do large amounts of batch processing, and sometimes those jobs slow down because they're now running through multiple systems, and you're trying to understand the precedence, and the actions to take when a batch job is not running properly. That's just one example, right? Then, what actions do we want to take? First, diagnosing why it's not working well: is it because some upstream system is not providing the data it needs? Is it clogged up because it's waiting on instructions from some downstream system? And then, how do you recover from this? Do you stop the thing, just kill it?
Or do you have to understand which subsequent batch jobs, or other jobs, will be impacted because you killed this one? All of that planning needs to be done in some fashion, and the actions taken such that, if we have to act because something has failed, we take the right kind of action. So that's one type of thing that matters for clients. Certainly performance is another that matters a lot, even for the most modern applications, because it may be an application sitting entirely in the cloud, but using five or ten different SaaS providers. Understanding which of those interactions may be causing a performance issue is a challenge, because you need to be able to diagnose it and take some action against it. Maybe it's the login, or the identity management service you're getting from somewhere else, and you need to understand whether they have any issues: whether that provider is giving you the right kind of monitoring or information about their system, such that you can reason over it and understand, okay, my service, which depends on this other service, is actually being impacted. All these kinds of things involve a lot of data, and it all needs to come together; that's where a platform like ScienceLogic comes into play. And then taking actions on top of that is where a platform also starts to matter, because you start to develop different types of what we call content. We distinguish between the automation platform, or framework, and the content you need to have on it. In ScienceLogic they talk about PowerPacks: things that essentially call out the workflows of actions you need to take when you have a certain signature, a certain bundle of events that have come together, and you've reasoned over it to say, okay, this is what I need to do.
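The idea of "content" keyed to event signatures can be sketched like this: when a bundle of correlated events matches a known signature, the mapped workflow of actions is kicked off. The signatures and workflow steps below are made-up examples, not actual PowerPack contents.

```python
# Hedged sketch of signature-to-workflow content. All signatures,
# event names, and workflow steps are hypothetical illustrations.

CONTENT = {
    frozenset({"batch_job_stalled", "upstream_feed_late"}):
        ["hold_dependent_jobs", "rerun_feed", "resume_batch"],
    frozenset({"disk_full"}):
        ["expand_volume", "verify_free_space"],
}

def workflow_for(event_bundle):
    """Return the action workflow for the most specific matching signature."""
    bundle = frozenset(event_bundle)
    # A signature matches if every event in it appears in the bundle.
    matches = [sig for sig in CONTENT if sig <= bundle]
    if not matches:
        return None  # no known content; fall back to a human
    # Prefer the largest (most specific) matching signature.
    best = max(matches, key=len)
    return CONTENT[best]

print(workflow_for({"batch_job_stalled", "upstream_feed_late", "cpu_high"}))
```

Preferring the most specific signature mirrors the batch-job example above: the right response to a stalled job depends on what else is going on around it, not on the stall event alone.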
And that's where a lot of our focus is: making sure we have the right content so that our clients' applications stay healthy. Does that build on what you were talking about?

Absolutely, yes. It's this confluence of know-how and intelligence from working with customers, solving problems for them, and being proactive about the applications that really run their business. And that means you're constantly adjusting. These networks, I think Syrenda has said it before, are like living organisms. Based on load, based on so many factors, they're not stagnant; they're changing all the time. And thus you need the right tools to understand not just anomalies, what's different, but the new technologies that come in to augment and enhance solutions, and how that affects the whole service delivery cadence.

Yeah. Syrenda, I want to give you the final word. One of the things I've found heartening as I look at this big wave of AI that's been coming is that there's been good focus on the business outcomes customers are having. Back in the big data wave, remember, we did surveys asking, okay, what's the most common use case? And the answer was "custom." What you don't want is a science project; you actually want to get things done. So any guidance you can give? I understand we're still early in a lot of these deployments and rollouts, but what are you seeing out there? What are some of the lighthouse use cases?

So certainly for us, we've been using data for a while now to improve service assurance for our clients, and I'll be talking about this a bit tomorrow. One of the things we've found is that, of the events and incidents we deal with, we can now automatically respond, with essentially no human interference, or involvement I should say, to about 55% of them.
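Responding automatically hinges on recognizing that a new situation is "similar enough" to one an automation already fixed at another client. One minimal way to sketch that matching is a set-similarity score (Jaccard) over incident attributes; real engines use much richer features, and the records, attributes, and threshold below are invented for illustration.

```python
# Toy similarity matcher for "have we fixed something like this before?"
# Past records, attribute tags, and the 0.6 threshold are hypothetical.

def jaccard(a, b):
    """Jaccard similarity of two attribute sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

PAST_INCIDENTS = [
    {"attrs": {"filesystem_full", "linux", "var_log"},
     "automation": "rotate_and_purge_logs"},
    {"attrs": {"service_down", "windows", "print_spooler"},
     "automation": "restart_service"},
]

def pick_automation(new_attrs, threshold=0.6):
    """Return the automation from the most similar past incident,
    or None if nothing is similar enough to trust."""
    scored = [(jaccard(new_attrs, p["attrs"]), p["automation"])
              for p in PAST_INCIDENTS]
    score, automation = max(scored)
    return automation if score >= threshold else None

print(pick_automation({"filesystem_full", "linux", "var_log", "prod"}))
```

The threshold plays the same role as the confidence gate discussed earlier: below it, the system hands off to a person rather than guessing.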
And a lot of this is because we have an engine behind it where we get data from multiple different sources: monitoring event data, configuration data of the systems that matter, and tickets, not just incident tickets but change tickets, all of these things. A lot of that is unstructured information. You essentially make decisions over it and say, okay, I've seen this kind of event before in these other situations, and I can identify an automation, whether it's a PowerPack or an automation such as an Ansible module or playbook, that has worked in this situation before at another client. And these two situations are similar enough that I can now say, with this kind of event or group of events coming in, I can respond in this particular fashion. That's how we keep pushing the envelope in driving more and more automated response, such that certainly the easy, I shouldn't say trivial, but the easy kinds of things we see in monitoring are being taken care of. Even the moderate ones, where file systems are filling up for some unknown reason, we know how to act on; services going down in strange ways, we know how to act on; and we're getting to even more complex things, like the batch job example I gave you, because some really pernicious things can be happening in a broad network, and we have to be able to diagnose that problem, hopefully with smarts, to fix it. And into this we bring lots of different techniques. When you have the incident tickets, change tickets, and all of that, that's unstructured information; we need to reason over it using natural language understanding to pick out the right, I'm getting a bit technical here,
verb-noun pairs that matter, that say, okay, these typical changes probably led to these kinds of incidents downstream at another client in a similar environment. Can we see that? And can we then do something proactively in this case? So those are all the different places we're bringing AI, call it whatever you want, AI, ML, into a very practical environment: improving how we respond to incidents in our clients' environments; understanding, at the next level, when people are making changes to systems, the risk associated with a change, based on all the learning we have, because we're a very large service provider with approximately 1,000 clients. We get learning over a very diverse and heterogeneous experience, and we reason over it to understand, okay, how risky is this change? And it goes all the way into the compliance arena: understanding how much risk there is in the environments our clients are facing because they're not keeping up with patches, or because configurations for security parameters are not as optimal as they could be.

All right, well, Syrenda, I really appreciate you sharing a glimpse into some of your customers and the opportunities they're facing. Thanks so much for joining us.

Thank you.

All right, and Dave, thank you. We'll be talking a little bit more later.

Great, thanks for having me.

All right, and thank you as always for watching. I'm Stu Miniman, and thanks for watching theCUBE.