Good morning. My name is Luke Heidecke, and I'm a consultant with Solinea, working with enterprise companies. I've worked in Germany for a customer there, and now I'm in LA with one of our media customers. I focus on infrastructure automation, DevOps processes, and cloud infrastructure, and on helping organizations figure out the tooling, the processes, and the skills their people need to adopt the cloud and move their applications into it. Today I want to talk about the challenges of the enterprise: the technical debt we encounter across various customers, the difficulty of capturing the state of current baselines, and ways to plan the baselines that will be used in the cloud and, at some point, perhaps in containers. I want to talk about making sure requirements and processes are prepared and clear as you go forward. The next piece is foundational images: identifying the various functional, procedural, and security requirements, building them in layers, and decomposing them to reduce complexity and create a working baseline. I'll also talk a little about the strengths of immutable images, especially as you move toward containers; those practices lend themselves well to clear working baselines in an enterprise cloud environment. The last piece is growing continuous improvement: the cycle of capture, create, test, and iterate, making sure everything is auditable, understood by all the teams, and manageable in a clear, automated process.
With the customers my colleagues and I have been working with, it's clear there's a burden of years or even decades of technical debt: the legacy of past decisions, traditional architectures, and old operating systems. I'm sure some of you have seen it; my customers are running CentOS 6.5 or earlier baselines that have to be patched and built upon as soon as they're installed, so there's instantly a need to update and modify something that was just freshly installed, and I'm sure there are even older versions still running. I know a lot of people, and a lot of our customers, still have systems that are precious snowflakes that can't be touched, and part of the fix is making the process automated, fast, and reactive enough that application owners and development teams don't feel paranoid or feel they have to keep their babies intact forever. Part of it is also capturing the processes: the security, organizational, and historical processes built up over time don't necessarily align with baselines in OpenStack, in public clouds, or especially in Docker images. It's very difficult to take those various requirements and break them down into workable baselines that make sense for a fully automated image build. The other thing we've seen a lot is documentation not reflecting the current state, because keeping it current is painful when it isn't automated, and for new people or new projects spinning up, unclear documentation can be very painful for engineers. One piece I saw on Twitter was the issue with the 10x engineer: an engineer who comes in and does the work, and then ten more engineers are required just to come along, clean up after that engineer, and figure things out in the future.
A lot of the pain for our customers has been realizing how important it is to automate, fix, and document things from the start in a clear and concise manner, even if it means simplifying more than you think you have to for the initial release, so you don't have to send in an army of engineers to clean up later. One of the other challenges is parallel and divergent baselines. An application team, an operations team, or a location within an organization may start out with a common baseline, back when CentOS 6.5 was first released, in 2012 or, I think, actually earlier than that. And everybody likes their little snowflake systems, so we see a lot of teams create something on the baseline, hand it off to, say, a security team that does some manual installs, send it back to another team, and suddenly it has diverged before the system has even gone to production. So a lot of this is making sure the tool chain and the processes for baseline creation are clear to all of those teams and that everybody is held to the same standards. Another challenge is timelines: if those processes and tool chains aren't usable by all the teams, people start making sacrifices in the name of shipping it now. You need tools that take that into account, and organizational oversight, so that control of baselines and images is maintained in an orderly fashion that's common across all the teams collaborating on them. The next piece is the requirements.
This means really taking apart and decomposing what you have in your stack currently and what you're moving into the cloud, or, if you're fortunate enough, doing greenfield deployments. Some of our customers are doing greenfield deployments with microservices and Kubernetes, which gives them a good opportunity: they're leaving their legacy systems in place, maybe still as a data source or some other backend systems, and making sure their new systems are focused on the best way to work in the microservice compute environments they're trying to implement for tomorrow. So take the application containers, application dependencies, environments, environmental configurations, locations, and so on, and break those down as functional requirements. Then it goes into security, making that part of the integrated process. A big problem we have at customers, especially in finance and government, is a mountain of security requirements that need to be integrated into the overall baseline but are instead treated as a layer on top, not considered as part of the systematic approach to baseline creation. Make sure those are decomposed, that there's a common understanding of what the requirements mean (and this goes for all of these pieces), and really look at implementations that make sense for an automated cloud deployment. From there, we look a lot at things like host-based firewalls, SELinux, or AppArmor with Ubuntu, making sure those technologies are also thought of in a systematic approach, so you're not sending the image off to the security team just to do a bunch of manual work; they're automated and included in the baseline as well.
The next piece is procedural. A lot of our work with customers has been recognizing that some of these decisions happen in silos: the network team or an engineering team comes down from on high and decides on a tool or a component within the baseline without keeping in mind how it's actually going to be used in development and test, through the entire application lifecycle into deployment. That goes for monitoring and management too: really evaluate the tools you're using to make sure they can be installed in this fashion and are common throughout the systems. If a tool can't be common across the application stack, don't use it, or evaluate a different tool; that goes for logging, metrics, even security monitoring, et cetera. From there, I want to talk a little about how we've worked with customers to decompose all of that and start building it into a usable cloud image. The big thing is starting with a minimal set for your baseline. Whether you choose Red Hat or Ubuntu, go with the minimal install; Ubuntu has been talking about some of this with Snappy as well. Use only what you need as dependencies for the rest of your application stack. The baseline should be portable across functions, so you're not maintaining a baseline for every single application or functional team; by reducing that complexity, you're not spread thin on maintenance and you can focus your efforts. Part of that, too, has been recommending that customers really focus on separating environmental and location configurations out of these baselines.
So don't lock yourself into "this is our West Coast deployment" or "this is our Europe deployment." Keep a single cohesive baseline and store those settings in tools like Consul or etcd; confd is another great tool. This is especially nice as our customers start using microservices, Kubernetes, and so on, because it lends itself well to those environments too. Customers have certainly used Redis and ZooKeeper for this in the past, but I think it's nice if the tool is purpose-built for that job. The other piece we look at is reuse of components: a single image catalog, curated by a team, not spread throughout the organization or siloed between use cases in departmental baselines, so that people can flow between areas, you can start moving toward service teams, and nobody has to relearn all of this as they move across the organization. That goes into building for production, too: as you carry these images through the application lifecycle, through development, test, and production, there's no wasted effort from each of those teams all touching it. The next thing is making sure that, with the baseline, you're not reversing changes in the future, not having to back out patches and the like as you go forward. I could talk a little about the minimal Ubuntu images, minimal CentOS and Red Hat, and the container operating systems like CoreOS and Project Atomic, but even if you don't choose to use those, use the same methodology: take the very minimal set of dependencies for your application stack and build from there. If you're not familiar with them, CoreOS and Project Atomic are very container-focused: they provide the base userland set of services, and everything you build on top runs in a container.
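As a sketch of that environment separation, a boot-time script might pull location-specific settings from a key-value store like Consul instead of baking them into the image. This is illustrative only: the key paths, file locations, and variable names are hypothetical, and it assumes a Consul agent reachable on localhost (its KV HTTP API returns a bare value when queried with `?raw`).

```sh
#!/bin/sh
# Illustrative boot script: fetch environment-specific settings from
# Consul at first boot rather than baking them into the image.
# Key paths and config file location are hypothetical examples.
CONSUL="http://localhost:8500"

# Consul's KV HTTP API returns the raw stored value with ?raw
REGION=$(curl -s "${CONSUL}/v1/kv/config/region?raw")
DB_HOST=$(curl -s "${CONSUL}/v1/kv/config/myapp/db_host?raw")

# Render a small config file from the fetched values
mkdir -p /etc/myapp
cat > /etc/myapp/env.conf <<EOF
region=${REGION}
db_host=${DB_HOST}
EOF
```

The same baseline image can then be deployed to any region; only the values stored in Consul differ between locations.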
So that's very interesting, but even outside those systems, use the same thought process for the baseline: keep everything as minimal as possible, then build your application stack on top of it. Again, build security in, integrated with the baseline. Whether you use the CIS benchmarks or other tools, build those in and include those considerations from the start, so the security team isn't held to a different or separate standard from the rest of the teams contributing to the baseline and application stack. That goes into the management and testing being automated: don't layer something like security on top in a way that isn't verifiable. Every piece of this should be tested and verified after the fact; the manual steps we've seen just make for headaches later, for troubleshooting, for developers, and so on. It lends itself nicely to that idea of building for production. That also goes into the integrity of images and building toward immutable images, which is a great thing with containers. Keep things that way from the start: if you change your baseline, start thinking about redeploying services, rather than constantly iterating on an image that might have been deployed years in the past. That really helps ensure the integrity of the image and the system can be maintained throughout deployment. The other thing is getting security updates out in a timely way. If these are all automated processes, with automated installs and tests throughout, included in a cohesive continuous integration and continuous deployment pipeline, you're not sending a VM off to a test team that runs a checklist of 4,000 different tests and maybe only gets through a quarter of them.
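A Dockerfile is one concrete way to picture that minimal, immutable layering. This is a sketch, not a recommended production image; the base tag, package, and config file are just examples:

```dockerfile
# Start from a minimal base image and add only what the application
# actually depends on. Image tag and package names are illustrative.
FROM ubuntu:14.04

# Application dependencies only -- no extra tooling in the baseline
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2 && \
    rm -rf /var/lib/apt/lists/*

# Configuration is baked in: a config change means a rebuilt image
# and a redeployed container, never an edit to a running system
COPY apache2.conf /etc/apache2/apache2.conf

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Because nothing is modified after build, verifying the image once in the pipeline verifies every container started from it.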
So as new dependencies and new services come on board, you don't have to wait around for the latest patches and changes; you're guaranteed a fully automated pipeline. Next, let's talk about creating that foundation and some of the tools we've used with the teams and organizations that are our customers right now. Here are the criteria I like to think about for configuration management. We're fairly agnostic about which tools to use, because organizations are different and there are different levels of language proficiency: if you're a Python shop, for example, it may not make sense to force in a configuration management tool, written in Ruby, that your developers, admins, and various other teams are going to have to use. Focus on the language: a declarative language that says what the system should be, not how to get there, so the end goal is clearly understood by all of the teams. That also makes it possible to write tests against it: "make it so," and then verify that it actually is so. Agentless, as I mentioned: make sure you can use the same configuration throughout, so you're not installing software, patches, or configuration one way at bake time and then switching to a different process, duplicating it in some other system, once the machine comes online. You deploy it and enforce it from the start of the build, rather than maintaining two different baselines that enforce the same configurations. And take those configurations and make sure they can be easily version controlled, in a format you can check into Git, SVN, or whatever method you use; it needs to be version controlled and traceable back to when an image was created and what its state was at that time.
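The "what, not how" idea above might look like the following Ansible play; Ansible is one of the agentless, declarative tools the talk mentions, and the package and service names here are just examples:

```yaml
# Declares the desired end state; Ansible works out how to get there.
# Runs agentless over SSH, so the same play can be applied at image
# bake time and re-applied later to verify nothing has drifted.
- hosts: all
  become: true
  tasks:
    - name: Apache is installed
      apt:
        name: apache2
        state: present

    - name: Apache is enabled and running
      service:
        name: apache2
        state: started
        enabled: true
```

Because each task states an end condition rather than a command sequence, it doubles as documentation of what the baseline is supposed to contain.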
Really treat everything as code, the infrastructure-as-code idea from DevOps; it has genuinely helped customers. Some customers have used systems that are very difficult to version control, more legacy systems where you set things up one time, and we've tried to recommend that customers move out of that old paradigm to tools like Chef, Puppet, or Ansible that lend themselves well to this. And Bash probably doesn't, yeah. [Audience question.] So, the question was: if you're moving toward immutable images, doesn't that remove the need for those configuration management tools after the fact, since the image is immutable in the first place? Exactly correct, but I think there's still a need for a declarative language to describe the image, not a mess of Bash that isn't necessarily understood by everyone. As you capture these layers of the images, you might move to Dockerfiles and the like, and once you already have things deconstructed it certainly is easier just to go to Bash, but I think there still needs to be an emergence of a better way of describing some of these things; we'll see how it goes. So it depends: if you're going to something like an immutable image with Docker, you might not have an agent running at all, just a check that the configuration is still as it was and nothing has been backed out. But in the cloud, on OpenStack, in a more traditional virtual machine environment, you may still have the agent running to enforce certain environmental changes.
It's certainly not a bad way to handle configuration: template the configuration files, and even if you store key values in something like Consul or etcd, you can still take some of those and overlay those configuration settings onto more traditional systems. It will be interesting to see how that plays out over the next six months to a year, whether better ways emerge, because my big concern is making sure you're not duplicating effort. If you're building a very minimal base Ubuntu image, make sure you're not rebuilding it a different way elsewhere; take that same image and use it as the base image for your Dockerfiles. That's where agentless really lends itself nicely: you can build in the same fashion and reuse the work if you're going to Docker, et cetera. And hopefully there are better ways of doing that; some good open source project, a little weekend project, could look at more declarative languages for describing some of these baselines. But if you do it right, especially with Dockerfiles, you shouldn't be overcomplicating things: if the Dockerfile, or whatever you're layering on top of your image in a microservices sense, is pages long, there are probably ways to simplify and minimize what you're actually doing to the image. So that will be interesting. Does that answer your question? Yeah. The other piece: testable. I'll go into what that means, but it's about making sure everything is clear and concise, that the end state is known, that your requirements are captured, and that you can actually verify what you put in the image. Next I want to talk about what happens once you have these things captured and your baseline declared: building it in an automated fashion, taking that information and building it for various clouds.
Most of our customers run some sort of hybrid model: they have an internal OpenStack cloud, they run some things on, say, Amazon or Google, and now in Docker, and they want to be able to make an image once and carry it through everywhere. There are older tools, like diskimage-builder under the OpenStack umbrella, and Oz, but Packer has really emerged as the best tool for the job. The nice thing with Packer, and this goes back to easily understood templates, is writing once and being able to build for VMware, Amazon, qcow2, Google, and Docker images. Not having to maintain scripts and automation for five different platforms has been really beneficial for our customers. The other nice thing is the support for Vagrant. A lot of my time is spent on my laptop, whether I'm traveling or at a customer site, and being able to take that same image and run it in Vagrant or in Docker, where most of my local development and builds now happen, is really helpful: through the entire application lifecycle you're using the same exact baseline, and you're building for production. Sorry, one more point before we go on, about being part of the larger CI/CD pipeline. At a couple of customers, when we first came in and were capturing requirements and talking about processes, getting an up-to-date image baseline, or a baseline for a new system, took at best a week. And even a few hours is too long: if I'm testing, iterating, changing something about the baseline, and I have to wait hours to see the effect of my changes, it's frustrating, and my patience wears thin.
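A minimal Packer template in that write-once spirit might look like the sketch below, using Packer's JSON template format with two builders sharing one provisioning step. The region, AMI ID, image names, and the `scripts/baseline.sh` provisioning script are all placeholders:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-00000000",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "baseline-{{timestamp}}"
    },
    {
      "type": "docker",
      "image": "ubuntu:14.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/baseline.sh"
    }
  ]
}
```

One `packer build` run produces both an AMI and a Docker image from the same provisioning logic, which is exactly the "build once for five platforms" benefit described above; more builders (VMware, qcow2 via QEMU, Google Compute) can be added to the same template.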
So having that automated process, with something like Packer letting you build locally exactly as you would within the larger CI/CD pipeline, has been really helpful for us. The next thing is what I said about testability: make sure that when you choose tools, they're testable. Tools like Serverspec, built on Specinfra, let you take Ruby RSpec tests and test your infrastructure: gather information about your infrastructure and make declarative statements like "it must have Apache installed," "it must be listening on port 80," "it must have these configuration settings." That's all available with Serverspec, and Test Kitchen lets you take those Serverspec tests and run them in a test harness, on Amazon EC2 (which is what we use), OpenStack, Docker, et cetera, instead of a manual smoke-test process. I don't know about you, but my days of running checklists and manually doing a lot of these verification steps are over. The next piece is version control: make sure there's that level of historical information, that you can roll back images, that you're not only saving the configuration that went into the images but also storing the images themselves in some sort of artifact repository. You don't need to keep five years of them, but some level of rollback, at least for what's in production, so you know what's running where and can recreate those environments, is really nice. Some level of image integrity checking is also nice. Tools like Artifactory and Nexus have been really helpful with that for our Docker images and other baseline images. From there, we look at improving and iterating: moving toward immutable application images, enabling zero configuration drift.
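A few Serverspec assertions of the kind described above might look like this. It's a sketch: the package name and port are examples, and it assumes the standard `spec_helper` scaffold that `serverspec-init` generates, run against a built image via Test Kitchen or rspec:

```ruby
# Declarative checks against a built image, using Serverspec's
# package, service, and port resource types. Package name and
# port number are illustrative.
require 'spec_helper'

describe package('apache2') do
  it { should be_installed }
end

describe service('apache2') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```

Because these read almost like the requirements themselves ("it must have Apache installed, listening on port 80"), the same file serves as both the test suite and an auditable statement of what the baseline guarantees.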
If there are no changes to a baseline, it's very obvious that there should be no changes on the system, and you can very easily report and audit against that. It's a great foundation for creating container images within Docker. And the more automated these processes are, and the quicker the turnaround, the easier the decisions around the usual pain points become: how difficult is it to redeploy an application or a service? How difficult is it to move to something like A/B testing, where I'm bringing up another instance of a service with a new version? I don't want to wait around, and most development teams we've worked with certainly don't want to worry about how long it will take for new versions or new images to be created. That timeline really affects decision-making, so the quicker the better, and the more likely teams are to actually use these baselines. And as you get the full test harness in place, I've seen a lot of power in being able to easily test how changes to the baseline affect downstream images: compose your application stack, try it with a new version of your baseline, and easily see how that affects this version of Apache or whatever service you might be running. From there, just to recap and really focus: overcome baseline drift, the ten different parallel baselines; reduce the silos between teams and baselines; minimize complexity; and consider the baking process when you first create an image. Be very deliberate about what you're actually putting in that baseline, make sure that avoiding duplication and keeping the configuration simple are thought about from the start, and don't end up going back and reworking endlessly.
People want to create the baselines, not have that be their entire career at some point. Using the right tools has also been really important to remind organizations about: make sure the teams that will use these tools, the organization's structure, and the best practices for the tools are all taken into account. It seems obvious at first, but just as your applications and your teams shouldn't operate in silos, you shouldn't make tool decisions in silos either; at the very least, the operations, developer, test, and security teams should all be taken into account when choosing the tool chain. Automate and verify: make sure the configuration management is declarative, easily understood by the maintainers and the people who will be auditing these images, versioned, and testable. And really build for production: one image that's used throughout the entire lifecycle, that meets the needs, and that isn't added onto on the last day right before production. One thing I want to make sure you know: at 1:50 p.m. this afternoon, Seth Fox and Spencer Smith, also from Solinea, will be talking about Packer, doing a demo and showing some example configurations. I hope you can make it; it's in Wakaba. And the basics: we are hiring. We travel all over the world, consulting on these same sorts of problems, and I'd love to talk with you. Any questions? We've got a little time. Anything? Okay. Ah, so, yeah. The question was: how many of our customers are actually implementing immutable images right now? The customer I'm really focusing on for the most part right now is heavily invested in that, especially as they move to microservices with Kubernetes and Docker containers.
They're very committed to doing that across the organization, to make sure all the benefits are gathered. As for the obstacles to moving toward immutable images: making sure the requirements are clear, that all the invested parties understand what needs to be done, and that there's buy-in from the maintainers and the operators. As organizations move more toward service teams, rather than separating out developer teams and maintenance teams, I think that's a really good step. And as long as there's buy-in, the days of allowing someone to come in with a one-off Bash script or a hacked-together Python script to change something in production "because it needs to be done right now" are over; it has to be clear, and enforced within the organization, that that's not the way things happen. That's one of the biggest challenges we've seen so far: getting buy-in from the entire team, and getting people to stick to it even when it seems challenging. But with automation, with that verification, with a process that's agile enough and a set of tools with a quick enough turnaround, I think it becomes obvious to teams very quickly that they don't have to sacrifice the tooling in the name of expediency, as long as it's enforced, right? [Audience question: is anyone moving to an immutable approach without breaking the monolith, that is, with big images moving into production, considering the implications of frequent updates?] No, I don't believe we have any customers that are taking that monolithic approach. Even where they're not doing microservices, they're not taking the monolith approach.
[Audience: we have a problem; we have large legacy enterprise applications, and they happen to be monolithic.] Yeah. I think it depends: on whether the architecture supports being able to redeploy, and on the frequency of change. So many things factor in: whether the applications can support it, and whether the downstream services, if it's something back end, can accept that way of doing things. And certainly there are the other tools, the monitoring agents; in a lot of traditional enterprises there's a collection of five different agents running on a system for your metrics, your logging, all these different things, and some tools just don't expect that way of working. But it would be an interesting challenge. I think you can still get closer to keeping things immutable; there might be compromise there. It would be an interesting problem. Is it a back-end application? Okay. All on the same server? Ah, under the server, yeah. That's just how monolithic it is. Interesting. Any other questions? Okay. Thanks a lot, everyone. Have a good one.