 Live from Boston, Massachusetts, it's theCUBE. Covering Red Hat Summit 2019, brought to you by Red Hat. Well, good afternoon, wherever you might be watching us here on theCUBE. We are live in Boston as we wrap up our coverage, headed toward the home stretch, you might say, of Red Hat Summit 2019. I'm John Walls along with Stu Miniman, and thank you for joining us here. We're now joined by Alois Reitbauer, who is the vice president, chief technical strategist, and head of the innovation lab at Dynatrace. And Alois, good to see you today. Thanks for being with us. Hello, thanks for having me. So software intelligence, that's your primary focus. You've got headquarters here in the Boston area and back in Austria. Tell us a little bit about Dynatrace, if you would, and I guess first off, what this news this week has meant to you in terms of the releases, and then maybe what you're doing in general. You know, what Dynatrace is all about? Yeah, so Dynatrace has been around for quite some time. We started out as an APM company about 14 years ago and have been reinventing ourselves over and over again. So we moved away from the traditional monitoring approach. The innovation we had in the very beginning, when we launched the first product, was what we back then called PurePath: the ability to trace end to end. Now we hear a lot about tracing becoming super cool for microservices. So that would be the first T-shirt we could be wearing: doing tracing before it was cool, 14, 15 years ago. And then obviously we kept evolving the product more and more, scaling it to bigger and bigger environments. So what does bigger and bigger mean? I remember in the beginning, when we were working on environments, we were talking about 100 hosts being a big environment. Then it was, okay, 500 hosts, that's a big environment. Today we say even 100,000 hosts, okay, it's a big environment, but they can get even bigger.
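The end-to-end tracing idea described here — following one request across service boundaries by propagating a shared trace ID and recording a timed span at every hop — can be sketched in a few lines. The names below (`traced_call`, `SPANS`) are illustrative only, not the actual PurePath implementation.

```python
import time
import uuid

# Illustrative sketch of end-to-end tracing: every request carries a trace ID,
# and each service hop records a timed span tagged with that ID, so the full
# path of one request can be reconstructed across service boundaries.
# (Hypothetical names, not the actual PurePath implementation.)

SPANS = []  # in a real system, spans would be shipped to a collector

def traced_call(trace_id, service, operation, fn, *args):
    """Run fn inside a span attributed to (service, operation) for trace_id."""
    start = time.time()
    try:
        return fn(*args)
    finally:
        SPANS.append({
            "trace_id": trace_id,
            "service": service,
            "operation": operation,
            "duration_ms": (time.time() - start) * 1000,
        })

def db_lookup(user):
    return {"user": user, "plan": "pro"}

def handle_request(user):
    trace_id = str(uuid.uuid4())  # generated once at the entry point
    # the same trace_id is propagated into the downstream call
    record = traced_call(trace_id, "frontend", "GET /account",
                         lambda: traced_call(trace_id, "db", "SELECT user",
                                             db_lookup, user))
    return trace_id, record

trace_id, record = handle_request("alice")
# every span for this request shares the same trace ID
assert all(s["trace_id"] == trace_id for s in SPANS)
```

Because both spans carry the same ID, a backend can stitch them into one end-to-end picture of the request, which is the property that makes tracing useful at microservices scale.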
Then the massive change for us was really five years ago, when we re-implemented our entire product offering and built the new Dynatrace, because we realized that, okay, showing people data and having them analyze data is nice, but it only gets you so far. The more complex your application, the more data you have to analyze, and the number of people you would need to deal with it scales almost exponentially. That's why five years ago we started to incorporate AI into our new core platform for automatic problem analysis. That's also why we say we're not just APM; that's just what we call the DOG tools, data-on-glass tools. They show you a lot of data and do some analysis on top of it, but they don't help you really resolve a problem. So we built in an AI engine that does automatic root cause analysis. Again, the next T-shirt: doing AIOps before it was cool, five years ago. And in the latest evolution, we also saw another change in the way people are using monitoring tools. We've invested a lot into building out an API. So we don't see the monitoring tool being over here and the application over there; we have the monitoring tool tightly integrated into the fabric via APIs. As of today, 80% of our customers are using the product also via APIs, by tying it into operational automation. It's what we heard even today in the keynote here about AIOps and how AIOps starts to control and manage a platform, more or less becoming the intelligence, the backplane, behind a modern cloud native stack. So we had Chris Wright, who was in the keynote this morning, come on our program this morning too, and we talked about the rippling effects of distributed architectures. Looking at applications, they're moving to microservices architectures. Where's customers' data? Well, lots of stuff all over the clouds and SaaS, and that has a ripple effect on your space.
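Tying a monitoring tool into operational automation via its API, as described above, usually means polling a problems endpoint and dispatching a remediation per detected issue. The sketch below is hypothetical: the endpoint shape, payload fields, and remediation table are made up (and the HTTP call is stubbed with canned data so it runs offline), not the actual Dynatrace API.

```python
# Sketch of API-driven operational automation: poll a (hypothetical)
# problems endpoint and dispatch a remediation action per open problem.

def fetch_open_problems(api_base, token):
    """Stand-in for an HTTP GET against a problems endpoint.

    A real integration would perform an authenticated request such as
    GET {api_base}/problems; here we return canned data so the sketch
    is runnable offline. Field names are illustrative.
    """
    return [
        {"id": "P-1", "impact": "SERVICE", "root_cause": "checkout-svc"},
        {"id": "P-2", "impact": "INFRASTRUCTURE", "root_cause": "host-42"},
    ]

# map problem categories to runbook actions (purely illustrative)
REMEDIATIONS = {
    "SERVICE": lambda p: f"rollback deployment on {p['root_cause']}",
    "INFRASTRUCTURE": lambda p: f"restart {p['root_cause']}",
}

def remediate_all(api_base, token):
    actions = []
    for problem in fetch_open_problems(api_base, token):
        handler = REMEDIATIONS.get(problem["impact"])
        if handler:  # only act on problem types we have a runbook for
            actions.append(handler(problem))
    return actions

actions = remediate_all("https://monitoring.example.com/api", "demo-token")
print(actions)
```

The point of the pattern is that the monitoring tool stops being a pane of glass and becomes an input to automation: problems flow out through the API and remediations flow back in without a human in the loop.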
I hear observability, monitoring, heck, even bring up the serverless world, and it becomes a whole separate meeting. So, Dynatrace has been going through a transformation. Give us a status check as to where your customers are, how you're helping them move through this modernization and the move to distributed architectures, and where that fits in. So, the customers we focus on are mostly Fortune 500 companies, and obviously they have everything that exists on the planet when we talk about software, from the mainframe to cloud native to serverless, as you mentioned. And they are in this transition process right now, modernizing their applications, which is a necessity: we all want to move faster, we want more flexible architectures, we want to build more innovative products. But at the same time, they realize that there's also a massive business risk behind following this approach. Think about being in the role of a CIO and saying, well, we're going to modernize our architecture, we're going to rebuild everything, re-platform and so forth. If you succeed, everybody will say, yes, you did what you had to do. If you fail, you failed. So for them, it's a big risk to move down that route, and we try to take that risk out of the process as much as possible. That starts, obviously, with monitoring their traditional stacks as they have them today, but we really support them along that entire journey to a cloud native architecture, beginning with what we refer to as our support for monolith-to-microservices architectures. The idea is basically that you don't want to rip apart your application just to figure out how it's going to work in a microservices world. We have this technology called SmartScape, which builds a real-time model of your entire data center and all the applications running in it.
It can then more or less virtually dissect your monoliths and see, okay, how would they look in a microservices architecture, without touching any code, and then make it work. So once you've done this, once you've decided to move there, the next step obviously is that you rebuild that application. Usually we see applications with microservices architectures being significantly more complex, or more distributed by design, than a traditional app. You might have had a web server, an application server, and a database server; now you might be talking about maybe 200 microservices or more. So the 20-times range is rather on the lower bound here, which means your traditional operational approach of "okay, it's either the database, the web server, or the application server" doesn't work anymore. On top of this, you did all of this to deploy fast, to go for bi-weekly releases, maybe even daily or a smaller granularity. So you're adding a lot of entropy to that system, and you have to analyze way more data than you ever had to before. And this is where we get to the level where theoretically humans could do it, but it would just take too long. That's where the whole AIOps capability comes in, and where we let the machines, let a monitoring tool, take care of it at that level. So we're helping them to operationalize these processes and then really supporting them along that whole journey, where every customer we talk to has this vision, which we also heard today in the keynote, of an autonomous cloud. With Kubernetes, we've all made a great step in this direction at the infrastructure layer: today I say I need five replicas of this container, and I don't care how Kubernetes, or OpenShift specifically here, makes it happen. But if we move to the application layer, there's a lot that still has to be done, and we have to make it easier for people to do.
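The "way more data than humans can analyze" point can be made concrete with even a trivial baseline detector: flag any service whose latest reading deviates sharply from its own recent history. A real AIOps engine is far more sophisticated, but the principle — machines scanning every series so humans don't have to — looks roughly like this (service names, numbers, and the 3-sigma threshold are all made up):

```python
import statistics

# Toy baseline detector: flag any service whose latest response time is more
# than `threshold` standard deviations away from its own recent history.
# With 200+ microservices each emitting many metrics, no human can watch
# every series; a machine trivially can. All numbers here are invented.

def anomalies(metrics, threshold=3.0):
    flagged = []
    for service, series in metrics.items():
        history, latest = series[:-1], series[-1]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(latest - mean) / stdev > threshold:
            flagged.append(service)
    return flagged

metrics = {
    # imagine hundreds of these series, refreshed every few seconds
    "checkout": [52, 50, 49, 51, 50, 48, 50, 340],  # sudden spike
    "catalog":  [31, 30, 29, 32, 30, 31, 30, 31],   # steady
}
print(anomalies(metrics))  # ['checkout']
```

A static threshold like this would drown operators in false positives at scale, which is exactly why the interview argues for smarter, context-aware analysis rather than dashboards full of raw numbers.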
And that's where we tie into the entire customer's ecosystem to automate their cloud environment, and we have actually built a practice around it, which we call autonomous cloud management, that we have been working on with customers to enable them to achieve this over time. Obviously it's going to be a long journey. Yeah, so you talked about that, ACM, autonomous cloud management. What exactly is that, and how are you bringing it to your customer base? Autonomous cloud management resulted out of two different areas. The first one was when we were re-implementing our platform, as I mentioned before; one step for us was to move to a SaaS platform. And we looked at all the operational practices that were around back then, and we knew we didn't want to build a NOC. We really didn't want to do it: having people look at dashboards 24/7, then go to a wiki, then read a description of how to fix a problem. If you're an engineer, why would you do it this way? It doesn't make any sense. So we developed our own practice, which we refer to as NoOps. Now, NoOps doesn't mean that you're not doing operations; that would be pretty crazy. It means not doing this traditional NOC type of operation, sitting there staring at a screen 24/7 and then manually executing every operation. So we had our own practice that we built around it, and quite frankly, we just built it because we needed it for ourselves. And then we kept talking to customers and partners: hey, it's really cool what you did there. How did you do this? What's your software stack behind it? What are the practices, what are your processes, what's the culture change? So we were engaging with some customers, and we saw that some of our customers back then were even doing bits and pieces of this as well. And we thought, okay, there's a lot of practice and a lot of knowledge around how to do autonomous cloud management.
And at the same time, we talked to other customers who are not yet on that journey, who definitely want to get there but are not quite sure how to do it, and they don't want to figure it out themselves. So we thought, okay, let's take all of these best practices that we have and build more or less a methodology around it: how to make this actually work, how to do it. We broke it down into individual sprints. This is sprint one, this is sprint two, to really have results within three months, six months, 12 months, whatever the pace is you want to run at. And then we realized, talking to customers, that this by itself still isn't enough. So that's why we started to open this up to an entire ecosystem. We brought ecosystem partners along, working closely with Red Hat and a lot of other companies, but also system integrators who can help us with bigger projects, because we as a company are a software company. We're not a services or consulting company. We do support customers in some of those engagements, but if you think of a really big Fortune 500 company, that's a multi-year project that will keep hundreds of people busy. So to recap: we built the methodology, and we built the ecosystem to deliver on that promise at scale. And the last step was, as we were doing this, we also built a reference architecture for it. It started as just an internal idea of how we structure this. We built that reference architecture and then realized, okay, it's actually super helpful for customers. So that's why we then decided to open source this reference architecture, this fabric, as well, to the entire software community so that they can also use it. So technically it's really these three pieces: the methodology, the ecosystem, and the reference architecture that you can work with to help you achieve that goal. All right, tell us how your AI fits into this.
I've heard some analyst firms saying some of the next generation of your space could be AIOps. Do you consider yourselves moving in that direction, or do you have a counter-view on that? I think today a lot of things are called AIOps that might not be AIOps; it's still a very loosely defined term. And as I mentioned earlier, we decided to have AI-based algorithms as part of our platform five years ago, and nobody back then was talking about AIOps. Funny story: some of our competitors even told us you can't use AI for monitoring, but then they bought other companies that were doing it. But again, the whole industry is learning here. I think it's really about data analysis. If you scale to bigger and bigger environments, you really have to look at the process of what the human operations people are doing. There are obviously some hard decisions that you have to take; you have to work with teams to resolve hard problems. But the biggest portion is really data analysis and interpretation, right? And a lot of this can be put into an AI component, and that's what the Dynatrace AI does. It is more or less your SRE in code, so to speak, able to find what's broken in the application, what was related to an issue, and automatically find the root cause. Very importantly, we are kind of opinionated on how an AI for operational practices should work. One thing you don't want is an AIOps system telling you, well, we should restart this service because some neural network said so. That's not building a lot of confidence. That's why our approach is to follow what we call a deterministic AI, one that is able to explain back to the user why it came to a certain conclusion. Why should I restart this service? Why should I roll back this deployment?
Or why does the AI believe that if I fix this problem, then the bigger problem will be solved? So that's our approach to AIOps. We started roughly five years ago, even a bit more than that by now, and I think we have a lot of experience really rolling it out at scale and seeing it really help people, because the next question we always got was: if you already know what the problem is, why don't you fix it? And that's exactly the conversation you want to have. Maybe just to briefly add here, because it usually comes up: okay, you have AI, is it replacing people's jobs? I don't think so. We also heard it in the keynote today from Chris: it's augmenting our capabilities. There are hard decisions that you have to take, but just churning through tons and tons of data is not a good use of people's time. Very often when we talk to operations teams, almost every time, first of all, you can't hire enough people anyway to get everything done that's on your plate. Secondly, just because of the amount of data, the time to react is longer with a human in the standard scenario. We do this demo on self-healing of an application where we deploy something broken into production and have it rolled back, and we can do it in 51 seconds. No human can do it that fast. That's just what pure software automation can do for you. And then you can focus on other areas that are more important, new projects. I always ask people in the ops space: what are the three projects that you want to work on and never have time to work on? Usually they come up with a list, and we say, this is what we do: we give you back that time to work on exactly those things that move your business forward. You said 51 seconds. You've never seen Stu in action. Stu, I have a lot of confidence in you. Well, we love the machine-enhanced human intelligence. We could definitely all use some machines to help us get away from the drudgery and be able to do more.
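The "deterministic, explainable" approach described here contrasts with an opaque classifier: given a topology model of which services depend on which, a root-cause search can walk the dependency graph from the user-visible symptom down to the deepest unhealthy component, keeping the path it took as the explanation. The sketch below is only a toy illustration of that idea, with an invented topology — not Dynatrace's actual algorithm.

```python
# Toy deterministic root-cause walk: start at the service showing the
# user-visible symptom, follow dependencies to the deepest unhealthy node,
# and keep the traversal path as a human-readable explanation.
# Topology and health states are invented for illustration.

DEPENDS_ON = {
    "frontend": ["checkout", "catalog"],
    "checkout": ["payments"],
    "payments": [],
    "catalog": [],
}

UNHEALTHY = {"frontend", "checkout", "payments"}  # e.g. failing health checks

def root_cause(symptom):
    """Return (cause, path): the deepest unhealthy dependency and how we got there."""
    path = [symptom]
    node = symptom
    while True:
        sick_deps = [d for d in DEPENDS_ON[node] if d in UNHEALTHY]
        if not sick_deps:
            # nothing below this node is unhealthy, so it is the root cause
            return node, path
        node = sick_deps[0]
        path.append(node)

cause, explanation = root_cause("frontend")
print(cause)                      # payments
print(" -> ".join(explanation))   # frontend -> checkout -> payments
```

Because the answer comes with the path that produced it ("frontend is slow because checkout is slow because payments is down"), an operator can verify the reasoning before acting — which is exactly the confidence-building property the interview contrasts with "a neural network told us so."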
Alois, safe travels. Thanks for being with us. He's headed back to Austria. Say hi to all your folks back in Austria real quick. Hi there. That's Alois, on his way home, on his way to the airport. Thank you for being with us here on theCUBE; we appreciate the time. Our coverage continues here at Red Hat Summit 2019. You're watching theCUBE.