I'm really excited to be here today to talk about what we've been doing at Health Partners for continuous delivery. It's been great to hear all these talks and feel like I'm in a place with fellow comrades as we go through this journey of OpenShift. Just hearing this last talk about the challenges and the journeys that they go through, we go through many of the same things and face many of the same challenges. So I work at Health Partners, which is a large regional non-profit health insurance and health care provider in the state of Minnesota. We don't have offices in San Antonio or Plano, which would be nice right now. But we're a large company. We were formed in 1957. We have over 26,000 employees, more than 90 clinics and hospitals. This is a very large organization, very well known in the Midwest. I'm part of our platform and architecture group, which was formed out of our web team, the team that ran our healthpartners.com website. And this group has been attempting to push the envelope here at Health Partners, grow our cloud practice, and develop new techniques for how we do our work at Health Partners and how we serve our customers. So I want to give you a little timeline, a little context for the work we've been doing, so you can understand later as we look at the data from what we've done here. We began our evaluation of OpenShift around the time I started at this company, back in November of 2016. We made a decision in February of the following year and handed that off to our leadership, and in the end OpenShift was chosen to carry out an implementation. We were given a date to get to production by the end of the year, and we were really excited to be able to release our first applications into production in November of that year, beating our release date by six weeks, which is really exciting. We took some time there to develop our day-two operations practices.
A lot of the things I'm going to talk about today come from that time of really focused energy around how we're going to manage applications as we go forward on this platform. And then over the last six to eight months, and going forward into the future, it's been a challenge of how we're going to take our whole legacy estate, everything that we have, modernize it, and learn and think about new ways of doing things. So we started with just basic REST services, API services, and got our first web application into production in August of this year. We had 50 services in late October, and we finally got security approval for getting external internet traffic into our clusters just this last week. So there's been a big effort over the last six months to really develop our capabilities on this platform, and that's what I'm going to talk about. So first I want to contrast our new world, what OpenShift provides, with the way people used to work at Health Partners. I like to think of this as the very high bar that it took to do really any work within our platform. Everything was challenging. We had large shared environments where web applications were deployed. Single-line code changes to production would take 26 hours, 36 hours, three days, five days. This is just for a single line. Teams deployed to production once a month, once a quarter. I'm sure many of you have lived in this world. This is the standard world of large companies in the past, and still today. Health Partners was particularly good at this old world. I hear stories of teams that would bring two feature films to the office, starting at midnight, as they deployed to these large shared environments, debugging errors as they came along the way, and would go home as people were showing up to work the next day. And this is the kind of horror that I was hired to eliminate. I wish.
But this was really the challenge that we faced in our environment, and it made teams make some bad architectural decisions because of the challenges of this old world. This is where OpenShift was our clear decision. OpenShift is a platform that our security team could agree on and our architecture team could agree on. This is something where we really gained consensus around working with Red Hat as a partner, and we were able to bring this product in-house and start to work with OpenShift. So as we began our implementation, we decided to think about the challenges of our old world and reshape that mindset. What should things look like in our new world? What should developers' lives look like? And we came up with a new mental model for how things should work. And this mental model meant we made some key decisions. There must be a low barrier to entry to the platform. It has to be easy to do the right thing. It has to be easy to experiment and learn. And it also has to be easy to consume the services that the platform exposes, things like secret management, configuration management, all of those things. Now, as we talk about the difference between a shared, collision-heavy environment versus an environment where we can turn over control to these development teams, we really started to figure out how we can give control of every part of their workflow to the development teams, so they can make the right decisions for their business partners, for their users. So we developed a tool chain around this. I was laughing at the last presentation as he kept mentioning components that we use as well: ServiceNow, GitLab, Artifactory. It was wonderful. Really great tools to use. And every company is going to have their own tool chain, right? What's key is exposing services for developers to consume in an automated fashion, exposing the services so they can consume them and build their own workflows around them.
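To make that concrete: with a Jenkins shared pipeline library, the entire entry point a team needs can be reduced to a couple of values. This is only a sketch of the idea; `hpPipeline` is the entry point described in this talk, but the parameter names and values here are illustrative, not the actual Health Partners implementation:

```groovy
// Entire Jenkinsfile for a team's application (illustrative sketch).
// The shared library supplies defaults for build, test, deploy, and verify.
hpPipeline {
    appName = 'member-search-api'   // hypothetical application name
    team    = 'digital-experience'  // hypothetical team name
}
```

Everything else, base image, environments, manifests, promotion, can come from defaults in the shared library, which is what keeps the barrier to entry low.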
So what we decided to do was invest heavily in the connection between Jenkins and OpenShift, as well as start a conversation very early with our auditors around the change processes they have in place right now. And those two decisions, turning over control to an automated workflow, and getting in early to discuss these things with our compliance and security groups, are what I want to talk about going forward here. So we've talked about this low bar to entry. What does the low bar to entry look like for us? Well, here's a Jenkinsfile that can deploy an application in our environment to production. There's not a lot there. Think about how hard it might be in your old systems, your old environments, to define everything you might need, right? I can pretty easily come up with an application name and the team I'm a part of, right? And then we can go from there and gain more context and more complexity and skill, right? But this is what I'm talking about when I talk about a low bar. We created a Jenkins entry point that we call our HP pipeline. We work at Health Partners, so everything starts with HP. And so this became an easy way for teams to come on and start learning about how the platform works and how their applications work on top of the platform. So from that development viewpoint, how do we go forward and actually carry out validations? We're not just slinging code to production, right? That would be taking this model to the extreme: oh man, everything is going to production, we're pushing code all the time, we're doing great, the platform is happy, and then you start having a terrible user experience. So what we did was define what we call the six stages of our pipeline. If you work in a large organization, you know that each team has their own use cases. I heard a wonderful talk this morning.
I think it was by GE Digital, where they said, you know, every team thinks their application is this special little thing, right? And so every team is going to ask for customization in potentially every one of these areas, right? The preparation of the application, creating the build, running tests, doing some sort of pre-deployment validation, deployment, and then validating the actual application in production, and then repeating those final three steps. So what we've done is define these six stages in an abstract way within our Jenkins pipelines. We first of all created defaults, like we do for everything, so that every team has a happy path they can consume. But as teams want a more complex workflow, we allow them to participate in building their workflow where they'd like to add complexity. So what I want to do is drill into each one of these stages, talk about how this connection between OpenShift and Jenkins has been key for us, and talk about some of the amazing things that the OpenShift API has allowed us to do within our build and deploy system. So when we think about building within this OpenShift world, what does a build mean? A build means I have a source control system and I want a validated container image. Now, validated means a lot of different things to a lot of different people, right? There are security groups for whom validated means I've passed all my security scans, I'm good to deploy, and I'll never have any vulnerabilities, which is wonderful, and fiction. I mean, you have to be honest, right? But then for application teams, a validated container image means that they know how their application is going to work before they hit integrated environments, right? Before you start affecting other people's experience within a testing platform for your application. So here's how the happy path for our build stage works. We are a Java shop.
We've been a Java shop. So our happy path is a Maven build of a Spring Boot application based on, as of a couple of months ago, a Java 11 base source-to-image build, then packaged up and pushed out to an external registry that we keep external to all of our clusters. But it wouldn't be validated, and application teams wouldn't be happy, without really solid testing. Martin Fowler has this wonderful article about the testing pyramid for applications, where you invest heavily in fine-grained tests early in the pipeline stages and then have smaller but more targeted smoke tests and end-to-end tests as the application moves forward. So as a Java shop, we have capabilities for running unit and component tests, Spring Boot tests. I don't know if any of you are Spring people. We're using in-memory databases to load things in, a number of different database testing strategies, so that before our application even hits an environment, we validate it as much as possible, right? We need good validation early in the process so that we're not deploying junk code even to our development environments, right? We want to validate this as much as possible. Now, after we've tested, we have a real validated container image, and we start to push the application up the stack. The next stage is our pre-deployment. Before we deploy an application to any environment, we carry out a set of steps that validate that that application should be deployed into that environment. There are simple things like manual approvals, right? When you're doing continuous delivery, you can wait between environments, wait before production, so that your business actually wants to release it. And that's a type of pre-deployment check. But there are many more powerful things that the OpenShift API can establish for you.
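A manual approval like the one just described can be a small gate in a Jenkins pipeline. This is a generic sketch using Jenkins' built-in steps, not the Health Partners implementation; the timeout and the approver group name are assumptions:

```groovy
// Pre-deployment gate: pause the pipeline until a human approves the
// production deploy, or abort when the timeout expires.
timeout(time: 4, unit: 'HOURS') {
    input message: 'Release to production?',
          submitter: 'release-approvers'   // hypothetical approver group
}
```

The `input` and `timeout` steps are part of Jenkins Pipeline itself; richer pre-deployment checks can run before or after this gate in the same stage.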
We do things like cluster health checks, and we've turned them into very thorough cluster health checks, so that we can do things like patch the CVE that came out this last week during the business day, while teams were deploying to production. By the time our CSO contacted my team and asked, hey, what are you guys doing about this thing? We said, yeah, we've already got a plan in place. We're pushing it up through our environments. We'll have it deployed in production today. Once you have built very solid pre-deployment checks, you can do this type of work during the work day. You can do it live in production, because you know that you're not going to affect the development teams that are working on top of you. So then, again, I just want to stress how effective the OpenShift API has been for allowing us to deploy our code effectively; I'll get to ServiceNow in a moment. As I've said, in each of these stages we've got this built-in happy path. And what we do is actually use Jenkins to generate OpenShift manifest objects and apply those into the environment. We've learned along the way about removing annotations and things like that, but we have this happy path so that developers, as they come onto the platform, don't actually have to know anything about a Kubernetes YAML object, a deployment config, a service, a route. All this stuff just happens for them. It gets generated based on the defaults that come from the pipeline. So this again creates a very low barrier to entry. And then as teams gain complexity and understanding, they can start participating in the construction of their applications on OpenShift. So we've developed many different types of deployment strategies where they can provide additional manifest objects. Things like a headless service, very standard things you'll deploy with your application, or a PVC request. Anything like that can be added along with your application.
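The generate-and-apply pattern described above can be sketched as a deploy stage like the following. The directory layout, environment variables, and the convention of picking up team-provided manifests from an `openshift/` directory are illustrative assumptions, not the actual pipeline:

```groovy
// Deploy stage sketch: apply generated defaults, then any extra
// manifests the team has added to the repo by convention.
stage('Deploy') {
    steps {
        sh "oc project ${env.TEAM}-${params.TARGET_ENV}"
        // Defaults rendered by the shared library (DeploymentConfig,
        // Service, Route) land in a workspace directory.
        sh 'oc apply -f generated-manifests/'
        // Teams opting into more complexity (headless Service, PVC,
        // StatefulSet, ...) drop their own manifests in openshift/.
        sh '''
          if [ -d openshift ]; then
            oc apply -f openshift/
          fi
        '''
    }
}
```

Using `oc apply` is what makes this repeatable: the same manifests can be applied on every run, and only the drift gets changed.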
And as OpenShift has evolved, we have evolved with it. We started with OpenShift 3.3 about a year and a half ago, and we developed most of this against OpenShift 3.5. But by the time we had development teams coming on, we had people asking us about using stateful sets, or daemon sets with node selectors. And we were able to enable that as well, just by tuning some parameters, saying where we'll pick up these objects from, and tying those things together. But those things don't happen when you come onto the platform for the first time. When you're first experiencing OpenShift, you don't say, I want to deploy a daemon set to a specific set of nodes. That's a complex ask, but it's an important thing to support within your development workflows. So eliminating that barrier to entry has allowed people to come in and learn, and then start being able to participate on the platform. And then after we've deployed, of course, we verify. We start checking off the boxes that the application is doing what you want it to do. Again, this is where the tie between Jenkins and OpenShift has been really helpful for us. We use a number of the plugins that are out on GitHub; we've helped contribute to some of these. As well as running code scans, we run integration tests that we pick up by convention within the application repository. That simple pipeline file that I showed you at the beginning will actually automatically look for a test directory, pick that up, send it a bearer token, and attempt to run tests against the OpenShift proxy API as new pods are spinning up. These are all just default behaviors that happen out of the box, that teams get without any cost. So this verification allows them to participate in building good tests against their application, so they can validate before they go to production.
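That proxy-based smoke test can be sketched as follows. The label selector, health endpoint, and environment variables are illustrative assumptions; the API path is the standard Kubernetes pod proxy subresource:

```groovy
// Verify stage sketch: hit a freshly started pod through the
// API server's pod proxy, authenticated with a bearer token.
stage('Verify') {
    steps {
        script {
            def token = sh(script: 'oc whoami -t', returnStdout: true).trim()
            def pod = sh(
                script: "oc get pods -l app=${env.APP_NAME} " +
                        "-o jsonpath='{.items[0].metadata.name}'",
                returnStdout: true).trim()
            // GET /api/v1/namespaces/{ns}/pods/{pod}:{port}/proxy/{path}
            sh """curl -sfk -H 'Authorization: Bearer ${token}' \
                 ${env.API_SERVER}/api/v1/namespaces/${env.PROJECT}/pods/${pod}:8080/proxy/actuator/health"""
        }
    }
}
```

Because the call goes through the API server proxy, the test can run against new pods even before a route exists for the application.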
Now, this strategy has also allowed people to learn and participate with us in growing this practice. We had a team for whom this proxy API wasn't good enough, because they wanted to validate within a browser. And at Health Partners, we have this wonderful event that we do twice a year, a Dev Days event. And at the end of the Dev Days event, somebody reached out to me and said, hey, I've got a pull request for your pipeline. We're running a Selenium Docker image ephemerally every time a new pod comes up, and we put that in there as a second option alongside your proxy API. And they did that for their application, but now other teams are consuming that as well. And so exposing these things allows teams to take on that complexity, write things that help them, and develop capabilities that can be spread across your organization. Now, within any large organization, you'll have a change management system. And within many of your organizations, I can assume your change management group is a little standoffish. Maybe you have a wonderful group. I'm not talking badly about any of your compliance folks. They're really my friends now. But it can be difficult to interact with these kinds of change management systems, right? We have ServiceNow in our environment, and we use ServiceNow for everything when it comes to an audit perspective, right? When we have external auditors come in, they start from ServiceNow. They start asking us, you know, validate that you did everything that you've said you've done for this change, right? So that means things like testing results, work tracking information from Jira, Git commit information, scan results, deployment time, deployment results, approvers. They want all this information. But in the old world it's hard to manage that, when you're doing these long, drawn-out processes that are fraught with manual errors, right?
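Recording that evidence automatically is mostly a matter of REST calls. ServiceNow exposes a Table API that accepts JSON over HTTP; the instance URL, credential IDs, and field values below are illustrative assumptions, not the real integration:

```groovy
// Sketch: create a change record from the pipeline via the
// ServiceNow Table API (POST /api/now/table/change_request).
def payload = groovy.json.JsonOutput.toJson([
    short_description: "Deploy ${env.APP_NAME} ${env.BUILD_TAG} to production",
    description      : "Automated deployment from Jenkins build ${env.BUILD_URL}"
])
withCredentials([usernamePassword(credentialsId: 'servicenow-api',
                                  usernameVariable: 'SN_USER',
                                  passwordVariable: 'SN_PASS')]) {
    sh """curl -sf -u \$SN_USER:\$SN_PASS \
         -H 'Content-Type: application/json' \
         -d '${payload}' \
         https://example.service-now.com/api/now/table/change_request"""
}
```

Updating the record's state, attaching test results, and attaching build logs follow the same pattern against the table and attachment APIs.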
And so what we transformed this into, rather than a locked-down system that's hard to access, is this: we started a conversation with them and turned our change management system into more of an engineering journal. ServiceNow has this wonderful Table API. Basically, if you've ever used the tool, anything you do in the ServiceNow UI can be done over the API. So we worked with that group to expose that, and to expose what's called the standard change request process, which allows us to build templates for standard changes. So just like in any engineering journal, where you'd have your experiment and write out all the steps that you're doing, our change management processes look like that now. When a build starts going into production, it starts recording and sending information from the pipeline into our ServiceNow tool. That's things like managing these tables of changes and tasks, and updating the state of when things actually happen. We attach files that show it was tested in the different environments. We attach the full build log, whether the build succeeded or failed, to the ServiceNow change. So this provides a simple entry point for making all the necessary API calls between our pipeline and the change management platform. So after doing this work, this is where we really started to think about using OpenShift, and this platform that we built, to start to change the way that our organization thinks. I don't have time to show you any of the code; the last slide has links to where the code lives. But code doesn't transform an organization. A number of people have talked about this today. Changing processes, and the way people think and work, is the thing that can really move the needle. And so one of the things that we did here was really fight for the ability to participate in the open source community.
This is something that Health Partners had never participated in before, and we were able to convince our legal team: a ServiceNow API integration is not core business functionality for a healthcare company, right? So they really agreed with us, and we laid out an agreement for how we're going to do this, and so we published the ServiceNow Jenkins plugin, which is now being used by at least six other companies. So that was a really exciting thing for our team. So through all this work with the pipeline, lowering the barrier to entry, what have we done in even just the seven months that we've been actively bringing people onto the pipeline, actively bringing people onto the OpenShift platform? We can get from commit into production, running all these automated processes, in 18 minutes. We've done this multiple times, proven it out, shown our security team and our compliance team, and they're all good with it. And so a single-line code change has gone from taking about 36 hours to get to production in 2016, down to 18 minutes. In just seven months, we've gotten to the point where application development teams are deploying to production 10 times per day. Now, this is absolutely unheard of in our organization, right? I mean, John Allspaw gave that wonderful talk back in 2009, and I'm just happy to be here nine years later, actually getting production changes recorded 10 times per day. And now we record everything that's going on in this pipeline. So we have over 300 actions that occur every day, and these actions now start to create a feedback loop within the pipeline, within the OpenShift platform, right? Once you start knowing what people are doing, how people are interacting with the platform, you can start visualizing how people are working, and you can use that to create a positive feedback loop, to more effectively measure how well you're doing what you're doing. So this means we can slice and dice the data however we want.
This is a visualization of our production deployments from the third to the fourth. You can see we're using Redash; I saw their product here. So you can see that it'd be really easy, if you're a team, to say, hey, I can visualize just my production deployments, or I can visualize all the deployments for this application, right? Once you're recording everything that's happening, you basically own this data, and you can really use it to help you understand the system. And as I said, it really starts feeding back to the platform team. We use this chart, which is just our number of deployments to any environment during the day, as a feedback mechanism for our team. Now, we weren't too concerned that people didn't deploy over Thanksgiving, but that little dip in November was actually something that caused us to go out and interact more with some of our development teams and say, hey, what's going on with the platform? Are you having some concerns? And you can see that since then we've really jumped back up, and we've had a little more engagement since then. We didn't have a mandate from upper management to do this work across our whole platform, so it was up to us to create a platform that was enticing for people to come to. And data like this allows us to make sure that we're accomplishing that goal. Now, these pipelines, using the OpenShift API in this way, have created a very flexible approach for us. In the past, those large shared environments were very difficult to operate and manage, and we didn't have a central automation mechanism. Once we solved this for applications, we took that and started to apply the same principles to how we automate other types of processes. So now we use these pipelines to manage our config maps and our secrets in OpenShift. We use this for administrative actions like team onboarding and namespace creation. We also use these pipelines to manage our OpenShift infrastructure.
We orchestrate Ansible Tower jobs through Jenkins pipelines that manage OpenShift itself. That's how we were able to kick off that patch for the CVE basically immediately. And we've even gone back and started to manage our old J2EE environments using these pipelines as well. It's created a way that we can manage basically everything through these validated, standard approaches that provide us that great data. So, just a couple of notes. I talked about how this flexibility has allowed developers to consume new features within the platform. And that's what my team is really looking at. How are we going to build for the future? How are we going to allow teams to do more complex things as the platform develops alongside us, right? Istio 1.0 came out; we're looking at Jaeger. We're looking at all the wonderful Cloud Native Computing Foundation projects and seeing how they might be able to assist us in building really good applications that serve the needs of our business. And so it has allowed us to build new capabilities at very low cost. And I'm out of time, but this open source participation has been something that has helped transform the organization as well, right? We're in an early phase of this. I think when you're at an early stage of open source development, you start to push code out, and we're still in that phase where we're taking code and just exposing what we've done. But one of the things we're really looking to do in 2019 is to bring that feedback loop back and have our developers contribute to the projects that we all use, things like Prometheus, OpenShift Origin, the Jenkins plugins that we're using, and, as I said, all the wonderful CNCF projects that are out there. So if I had to boil this down into six points, the last one is: hey, we're hiring. You want to come live in Minnesota? It's great. It's warm there, right? It's not warm, no.
Open source is wonderful; audit and compliance are your friends, so bring them into the processes you're developing and into your automation. Supporting testing is critical, and OpenShift is great. So thank you all very much. This last slide just has some links. Can I ask a quick question? Can you talk a little bit about how you dealt with your legal team to get permission to contribute to open source projects? Yeah, one of the things that we did, you know, we had some very specific examples of things that we wanted to open source. So you'll see two of our projects here, the first two: the ServiceNow plugin, and then we've open-sourced a library that has allowed us to unit test everything that we run inside of Jenkins. So for these two things, we wrote out: what is this code doing? Why did we develop this code? And we talked about how it's not really tied to any business process. And so, sitting down with that, we had a couple of leaders really champion this cause and say, yeah, I think it's going to be important for us. So we got a coalition of people that could go to our legal group with a unified front and a unified statement that says, this is a normal process that other companies are participating in. And we found other companies in our sector, in our area, that are doing the same thing. And in the end, they were agreeable and they were down with it. That's really great to hear. Thanks. All right, any other questions? Well, we've got them up there. If not, if Brian Gracely is in the house. Oh, we got one back here. Hey, Natali. Hello. That was a very good example. And I wanted just to know how developers develop from their machines. So you showed us that from commit to production is 18 minutes, which is great. Fantastic. But I was wondering if developers develop on their machines with some Minishift, or directly on the cluster? So we're not using Minishift locally yet, but it's something that we're experimenting with.
This has definitely been a challenge for us. Our local developer workstations, minus mine, are kind of locked down, and that's made it difficult to run things like Minishift and validate locally. So right now, we allow developers to work locally and connect out to other services running in our development environment. So if you've got a service that's calling out to multiple other ones, that's how that would work. We try to have them isolate those components and just call out to the other components that are running in our lower environment. But it's definitely a challenge. All right then. Any other questions? I see a hand way over there. So it sounds like you're using Jenkins kind of as a replacement for the source-to-image functionality. Can you talk about, once I have source code, what's your process for setting up an environment in your Jenkins environment? Do you have multiple Jenkins servers, and then your OpenShift project and those resources? That's a great question. It was something I had to cut out of this presentation due to time. But we give each development team their own Jenkins instance, which they manage through those administrative jobs from a central place. So if you need to spin one up or upgrade it, you can go to that central place and manage your Jenkins server. Then from there, we do use the source-to-image process. We use the Jenkins Kubernetes plugin to spin up dynamic pods to package artifacts, and then we send those packaged artifacts into a source-to-image build that combines them with a base container that my team provides. So we have a base nginx image, we have a base Java application image, we have a base for basically all the different types of applications we support. That's a great question. Thank you.