Okay, my name is Bob Johnson. I'm with Humana. I have Mike Villiger here with me from Dynatrace. I want to tell you Humana's story today about where we are with our digital transformation, specifically around the operations space and some of the things we've done there, as well as what we've done with Pivotal Cloud Foundry. So how have we united Pivotal Cloud Foundry and Dynatrace, along with some other capabilities and tools within Humana, what has that done for us, and what are some of the results that we've seen? And there's a story in here as well, so we'll get to that a little bit later in the presentation. In terms of agenda today, a few topics that I'll go through. I'll talk a little about the Death Star and the DEC. There's definitely a Star Wars theme here for sure. I'll talk about DevOps from a couple of angles: the acceleration of the pipeline and actually deploying our applications into the production environment, and some of the things we've been able to take advantage of with the Cloud Foundry platform for that. Then I want to talk about DevOps monitored, and some of the things that we've done with Dynatrace from that angle. Michael will talk a little bit about how monitoring happens in PCF; he's certainly the expert on that. I'll talk about Nancy's story, and you'll get a chance to understand Nancy a bit. And then I'll talk about what the 110 plug is, which, if you're from the US, will probably resonate with you. If you're not, maybe we're talking about a different voltage, but the concept is generally still the same. Okay, what is Humana's monitoring strategy? We've really got four components that we focus on, and they have been really key to us. We invested in application performance management about five years ago.
We partnered with Dynatrace on that and have been using their products successfully for quite some time. It's been a gradual ramp-up. We've spent a lot of time starting with the basics and with a small set of apps, to the point where at this point we're very well covered from an APM perspective across our entire enterprise. I want to talk about engineering practice as well, and this is really important. Mike and I talk about the concept of a common language, a lingua franca if you will, between development teams and operations teams, and how a lot of the divide between those teams in a lot of situations is that they don't have that common language. It's a different perspective: operations teams are motivated by one thing, development teams are motivated by another. But what we found they can do is use the same tools, the same pieces of software, and those capabilities. So what we've really spent a lot of time on is investing in the performance side of APM and treating that as a discipline of performance engineering at Humana, and Dynatrace has really been a key tool for us in making that come to life. I'll talk about the IT command center. We have dubbed that the Death Star, and there is a story behind the naming that I can go into offline. But suffice it to say, our command center is situated within the application team space. So in one of our buildings, we have a command center where application teams can come in. That might be application operations teams, application development teams, et cetera. More specifically, it's located in very close proximity to our executives. So when we have a major issue or a major incident, the executives are very nearby and can come and ask questions. They can see the monitors light up, and they can really get a good picture of what exactly is happening in our environment.
So really over the last five or six years, we've seen a dramatic improvement in our stability and our impacted user minutes because of some of the practices and tools that we've put in place in this arena. And then lastly, another goal is to always improve. That's a big theme of the rest of the message: what can we do to always improve relative to what we're doing? It's never good enough. So how do we make it faster? How do we make it better? And how do we make it easier? So that's the Death Star. I do want to talk a bit about the Digital Experience Center, the DEC. The DEC is modeled from a partnership with Pivotal. Humana and Pivotal worked together several years ago to stand up a lab-like environment with development workstations, pair programming, lots of new practices, and Cloud Foundry was really a key part of that. The DEC's mission I've got on the screen, so I won't necessarily read it to you, and you've got a link that you can visit to find out more information. But their mission is really around human-centered design: putting the human at the center, very specifically putting our members at the center, and thinking about their entire experience. Obviously, in the healthcare arena that is recognized as a challenge. It's a difficult industry to be a member in, and it's difficult to have a good experience or a perfect experience. But what the DEC really does is try to focus on that. A big part of that is the ability to go fast and the ability to iterate quickly. I'm sure most of you are familiar with Cloud Foundry and its capabilities; that's why it was built. A developer gets a chance to write code, and that's all they do, and they can push that code very quickly and iterate quickly.
So when you think about the three ways of DevOps, the last way being experimentation, we really view the Digital Experience Center as a place to enable that experimentation in partnership with our business, and it's been very effective for us on that front. So again, on DevOps accelerated in Cloud Foundry: Humana has invested in Cloud Foundry specifically as a way to go faster. We found that PCF provided, as I mentioned, a developer-centric platform for development. Software engineers write code, and that's all they do. And obviously everyone's familiar with cf push. You write your code, you cf push, and you're done. It's been very helpful for us to leverage Cloud Foundry in that respect. We do still do other DevOps practices at Humana, and we use other tools as well, particularly for infrastructure automation, many of the tools that are out there. I won't really cover those in this presentation, but what we found is that for the right applications, Cloud Foundry has been fantastic as an accelerator, and it really is just a shortcut. The things that in the past you'd spend a lot of time building automation and orchestration for, to make infrastructure easier, Cloud Foundry just takes completely out of play, as you know. I think the haiku that hopefully everyone's familiar with sums it up really well: "Here is my source code, run it on the cloud for me, I do not care how." So everybody hopefully is familiar with that. And I've come up with a different haiku, which is sort of a parallel: "Oh, and by the way, monitor it for me too. Ops stuff is so cool." So a little bit of a bent to the dev and the ops side of things, right? I feel like clouds don't just have to be for developers. Operators get a chance to think about clouds and cool haikus and those kinds of things too. So what we've seen is that Dynatrace has been really key for us, specifically with PCF. We spent some time last year working with Cloud Foundry.
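For those who haven't seen it, the workflow being described reduces to a couple of CLI commands. This is a minimal sketch, assuming the Cloud Foundry CLI is installed and you're logged in to a foundation; the org, space, and app names here are hypothetical examples, not Humana's actual names:

```shell
# Target an org and space (names are examples)
cf target -o my-org -s dev

# Push the app from the current directory; the platform detects
# a buildpack, builds the droplet, and runs the app
cf push member-portal

# Iterate: change the code, push again -- that's the whole loop
cf push member-portal
```

That single `cf push` step is what replaces the infrastructure automation and orchestration work mentioned above.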
We looked at Dynatrace as really an opportunity to do full-stack monitoring. So Dynatrace is not just AppMon and some of the tools that we've always used in the past; it's also a very modern system for managing and monitoring containerized workloads. And what we've found is that it gives us tremendous visibility into the full stack. We obviously can't get all the way down to the hardware level to be able to monitor at that level, but anything from the host level on up has been fantastic. And when we talk about feedback, this is a piece of that. Having that feedback capability, from a system or machine feedback perspective, looking at our log data, Dynatrace has been fantastic. We can look at the host. We can look at business transactions within the applications, and how applications talk to each other through APIs or services. As we start to look at microservice architecture: what does an application look like relative to that architecture? As we work our way up the stack: what's the user experience? Are users using the features that we thought they would use? The things that we've invested in that maybe we thought were going to be a home run? In some cases, we've found that users don't necessarily use what we built. So when you think about that experimentation and really tightening the feedback loop, this has been critical for us. And it's a one-stop shop to be able to do that. It's not just application performance monitoring anymore. It gets right into: are people doing what you expected them to do? And if not, okay, then let's flip back over to the Cloud Foundry environment, or our developer environment, and do something different. Okay. Mike, I think you wanted to talk to this one. All right. Make sure that I hit the right on switch. Awesome. All right. So hopefully everybody was able to see that really great animation that my fine marketing folks created for me.
So obviously, once again, to reiterate on some of Bob's great comments, what we're looking to do here is weave Dynatrace into the fabric of the platform itself, to make Dynatrace the tooling that provides that common language everyone can use to enable those feedback loops. What the animation is indicating here is what Dynatrace has done: we've created a BOSH add-on that drops the agent on every single one of the hosts that make up your Cloud Foundry environment, and then we natively understand the container technologies that are utilized by the platform. And one of the cool things here that's actually really relevant for Humana: Humana is kind of leading the charge when it comes to running .NET workloads on PCF. So I find that that's actually one of the really neat and innovative things that Humana is doing. We're able to seamlessly instrument both the Linux pieces as well as the Windows pieces, and really manage the whole thing with BOSH. Right? Is that cool? Absolutely. So I told you I would talk about Nancy, and I do want to spend a little time talking about Nancy. So I guess the first question is: who is Nancy? Is Nancy some crackerjack straight-out-of-high-school software engineer? No, she's not. Is she a strong site reliability engineer who's really a key part of our APM solution? She's not that either. In reality, Nancy is a framework within PCF, and many of you may be familiar with it. We use Nancy as a foundation for building our .NET applications. The story with Nancy is an interesting one because, from my perspective, it's a cautionary tale of a transformation in general: not so much from a software perspective, more from an operational perspective. In October we had a member-facing application, during really one of our critical periods of the year, our annual election period, that experienced a failure. And we started to troubleshoot that failure. I've shared a couple of screens that we used at the time.
The top screen shows DC RUM, which is one of the Dynatrace products. It's really a critical screen for us because it shows us trends in the environment and the experience across many different applications. And the bottom screen is a picture of the more modern Dynatrace product, the one that we had instrumented with PCF. So what happened is our ops teams engaged, everybody poured into the room and started troubleshooting the problem. When we sat down, we very quickly saw 500 errors. In a service-oriented architecture, which many of you probably work in, it's very common to think of a 500 error as a problem with a dependent web service. You've got a situation where you're calling a web service and that web service isn't responding. So the first thing you do is go find the owner of that web service, track them down, and say, hey, what's going on? There was this error message that Dynatrace showed us, specifically around Nancy, that we kind of discounted. We said, yeah, we got it, but it doesn't really make any sense to us, so let's move on. In the meantime, we pulled out our runbook for our application recovery processes, spent a lot of time going A to Z through the runbook, trying everything that we could, to no avail. We tracked down all the dependent service owners, we got them in, and everybody gave a thumbs-up that their service was innocent, if you will. Sure enough, by the time we got through with that, the only thing left was: let's go back to the basics and see what Dynatrace tells us. So when we looked at the Nancy issue again, we were able to actually troubleshoot it, and if we had started with that to begin with, we would have been spot on and able to restore service very quickly. The root cause actually ended up being a problem with a back-level version of a buildpack.
We'd recently migrated to a new platform and had not picked up the proper buildpack for this particular application. Once we repushed the app with the latest version of the buildpack, it recovered very quickly. But again, that's not a technology tale as much as it is an operational tale. For those of you who are attempting to scale your company and your enterprise, you've got a lot of people managing your operations space and supporting your business. I think it's important to understand that you can't just expect those same people, with the same processes and the same runbooks, to be able to do the same thing they've always done, given some of these more modern environments and tools. So, the last slide, and then we'll open it up for some questions: what is the 110 plug, and what does looking ahead mean for it? The concept of the 110 plug is very basic: make monitoring easy. Our intent with the 110 plug is, for Dynatrace and for some of the other monitoring tools that we use, to set up patterns and references such that we can put those in the hands of our engineering teams, and they can deliver what is needed to build these monitors in production. Today we have a blend of an engineering team that does that and a separate operations team that does the instrumentation. What we're really trying to drive to is the concept of monitoring as code, where everything is codified and becomes part of the repository where you're storing your code, and then you use that to push your code as well as your monitors to production. We're planning on continuing to use our enterprise PCF environment and Dynatrace for other cloud environments as well. As we look at public cloud and some of the capabilities there, we'll look at what Dynatrace has to offer. I've already talked about putting Dynatrace in the hands of the developers and the software engineers, and that's absolutely still part of our strategy.
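To make the buildpack fix from the Nancy story concrete, here's a minimal sketch using the Cloud Foundry CLI. The app name is a hypothetical example, and which buildpack applies depends on your workload; the point is that pinning the buildpack explicitly avoids silently carrying a stale one across a platform migration:

```shell
# See which buildpacks (and versions) the new foundation offers
cf buildpacks

# Repush the app, explicitly selecting the current buildpack
# (app and buildpack names are examples; a .NET app might use
# a different buildpack than shown here)
cf push member-portal -b hwc_buildpack
```

The same pinning can live in the app's manifest, which fits the monitoring-as-code theme: the deployment configuration is codified in the repository rather than relying on platform defaults.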
So again, don't look at this as just a monitoring tool; look at it as a performance tool. What the tool can do is actually draw lines within your applications and define business transactions in ways that other things can't, and when you put that in a microservice world, it becomes even more powerful. So it's not this chaos, this sea of all kinds of things trying to talk to each other that you have no visibility into. This gives you really full visibility into what's happening and lets you troubleshoot. And then lastly, integrate that with our DevOps and our continuous delivery pipelines. Like many of you, we are reworking our systems development life cycle from a process angle, and we're rethinking the way that we do things, shifting from more of a project-based to a system-based approach. A key part of that is what we can do to integrate Dynatrace and other monitoring tools into that pipeline as well. So these are really the components of what's next for us and the things we'll be focused on over the next several months to make come to life. And that's everything I have for this part of the presentation. I just wanted to open it up for general questions. I've got the regular handheld mic here so I can walk around, or, quite frankly, the acoustics in this room are actually really good, so I'm pretty sure I can probably hear you if you have any questions. I think he's saying it's probably better to use the mic. All right, awesome. So, are there any questions? All right, I'm going to move down then. Yeah, a couple of questions actually for you. First one: are you guys using the offline Dynatrace Managed? Yes. Okay, and? Yeah, on-prem, on-prem. That was question one. And then the follow-up question: you referenced both your application teams having access to the console, as well as operators and others within your group. How are you guys doing the delineation of access between those different groups? Yeah, we still have segregation of duties.
We have a security model that separates those things. So it's not that our developers have full access to production or anything like that. We still control the access, both within the Dynatrace world and the PCF world. So I actually work really closely with Bob on a lot of this stuff, hence why I'm up on stage with him. I think one of the things that we talked about very recently, when I was on site there, was the functionality of management zones. The whole concept of management zones is to enable some of the delineation that you're talking about, so that you can have some separation of concerns that's tied into the accounts. So the devs will see the information about their applications, and then Brian and some of the other folks on Bob's team would have visibility into the environment in its entirety, right? Because I think that was kind of the gist of your question, right? Are you guys also fronting it with SAML for the front end for authentication? Yeah, we definitely use SSO and SSL. I'm not sure that we use SAML, per se. I don't have the full details on the security pieces. Any other questions? Everybody awake? Everybody's ready to head back home, or maybe enjoy a weekend out in Boston. All right, awesome. Well, it was a pleasure to come and talk here with Bob again. It's always good to see my fine friends at Humana. And likewise. Awesome. All right, well, if there's no more questions, we will adjourn. Thank you so very much.