One thing that this enables, and I think this is something that's become quite popular: in a traditional systems environment, and it's interesting that virtualization has become traditional in that sense, because it was once the late-breaking technology, but it is traditional now. Most IT departments are running on virtualization, and most of us run it on our laptops for daily development work, those kinds of things. You can migrate an application. Let's pretend it's written in language X; I don't want to pick a real one, because I don't want to get into religious arguments while I'm presenting. Language X version one may or may not be compatible with version two. Let's say, for example, Fedora 22 is still sitting on language X version one, but we want to take advantage of some new feature or some new performance improvement that came out in Fedora 23. Well, we can take a platform container base image based on 22, put our application inside that container, and then run that container on Fedora 23. This lets us decouple application runtimes from the host in a way that doesn't require duplicating the operating system: we don't need to duplicate the kernel, the entire file system tree, glibc, langpacks, all of those different things, and we don't need to spend storage on those copies. Now, I do want to say that containers don't necessarily replace virtualization in many cases, because of the different trade-offs involved, but you are afforded this new luxury.

So, moving on to something that has sprung up alongside this. I'm not sure which is the chicken and which is the egg here, because microservices have become more popular as a side effect of containers, but containers are also becoming the de facto standard mechanism by which we deliver microservices. Microservices, again, are not an entirely new topic. I like to trace the idea back to microkernels, and microkernels have existed for a very, very long time; they can absolutely buy beer in most countries. The idea is that you have small components of the system that interact with one another through some kind of inter-process communication, and that communication could potentially happen across the network. If we default to the network, we can geographically disperse components of the system, replicate them, load balance them, and allow them to go up and down independently. Microkernels do something like that, and anybody who's vaguely familiar with microkernels likely knows about the Amoeba operating system. That's what it did: multiple network-connected computers ran different components of the low-level system anywhere on the network, and the whole thing presented itself as a single computer.

We can take that idea further up the application stack, and we find very similar parallels, in the sense that we can now take tiny components of what were previously monolithic systems and move them around. And you could effectively argue that system administrators have been doing this for a very long time using pipes.
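To make that concrete, here's a minimal sketch of the kind of pipeline I mean (the log file name is a made-up example): each utility does one narrow job, and the pipe is the inter-process communication between them.

```
# Count the top five client addresses in a (hypothetical) web access log:
# one tool extracts, one sorts, one counts, one ranks, one truncates.
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -5
```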
You effectively have these tiny applications that do one thing and do it well; you submit input into one side of an inter-process communication mechanism, it comes out the other side, and you can daisy-chain those together to create a solution. Now, your insane thirty-utility one-liner is maybe not the best way to run an enterprise, but it's a decent example of what you can do by passing data through small utilities. So this is the Unix way. "The Unix way" belongs in giant air quotes, because you'll get a different definition of what that is depending on who you talk to. However, I like the distilled version: you do one thing and you do it well.

So you can decouple previously tightly integrated components so that they're loosely coupled and more interchangeable. As long as a component complies with some API standard you've agreed on, you can plug in and replace different pieces of the system. And if they're loosely coupled, you can take that idea from the Amoeba microkernel operating system and geographically disperse components if you want, in such a way that they interact just as though they were local. If it works over the network, it works over the loopback on localhost too, so you can host everything in one environment or spread it out. And we now live in a world of infrastructure-as-a-service clouds, with providers that let you disperse this literally across the globe in multi-zone, multi-tenant environments, which will hopefully add redundancy and resiliency to the service. What we get out of this is smaller, more focused components that are more easily testable independently, so we can iterate on them faster while keeping code quality up, and hopefully reach a point where development is faster but we're not sacrificing quality.

So, immutable infrastructure. This one actually is new; it's popped up more recently and it's gaining ground, and some of the newer technologies, containers being one of them, along with the growing appetite for microservice architectures, have led to it. Immutable infrastructure is effectively fully automated: it should be possible to deploy it, redeploy it, and tear it down with minimal human interaction. And the idea is not just that you fire up some virtualization templates, run some post-boot task, reboot into new updates, and then run your configuration management. The goal is that at deploy time, you're done. It should be static: once deployed, you don't change it, and if you need to make a change, you redeploy. It's this new paradigm of: don't do config management in the environment, do config management at build time, and keep the environment as static as you possibly can. That gives you these immutable pieces, and those pieces can be tested as a cohesive unit. We can then deploy them and verify that the thing that made it through testing is the thing that's running in production. Nothing out there changed; there's no mutable state, or at least there shouldn't be, in an ideal world.
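As a sketch of what that verification can look like when container images are the immutable pieces (the registry, image name, and tag below are all invented for illustration): images are content-addressed, so the digest that went through testing is verifiably the digest running in production.

```
# Configuration is baked in at build time; the image is the deployable artifact.
docker build -t registry.example.com/myapp:build-1234 .
docker push registry.example.com/myapp:build-1234

# On any host, the content-addressed digest shows whether the artifact is
# byte-for-byte the one that passed testing.
docker images --digests registry.example.com/myapp
```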
Nothing has changed, and that gives us the ability to catch an unexpected change, whether it came from some software update, or from a config management agent changing things out from under us because new person X or new person Y committed to master by accident in the config management Git repo. So what we deploy is effectively a build artifact. We no longer deploy in the traditional sense of the word and then have automated configuration management jobs run afterward; you deploy a build artifact, and that build artifact could be a container image. So if you take your Dockerfile, run it, and let it do its thing, at the end you have an image, and you can distribute that image and start the service. There should be no added configuration management required, because you can move your configuration management into build time. You don't need to run it on the end host, because you can run it within the context of the container image build, so that all of the configuration you want happens just like it always has, except the delivery mechanism is different: you are effectively shipping a tarball that has everything in it exactly as you wanted it, and that tarball, with its metadata, runs within the Docker environment.

So, you need a configuration change? Build a new artifact. Artifacts can then be tested and graduate, so you can have your dev, test, stage, production pipeline, and the build artifact graduates between environments, potentially loading different configuration per environment so that you're not pointing dev at your production database. But the image itself goes through unchanged, and here's why that matters. Let's say some new update of library Z shows up: you have version 1.1 in dev, test, and stage, but 1.2 landed with a security fix, so the ops team applied the update to production. Say something changed in it that your application didn't take into account. So you promote, and within the window in which your software graduated from stage to production, something changed out from under it and things crash. What does your rollback procedure look like? In some scenarios, that can be painful. With container-type technologies, it's very simple, because you can change the tag the service points to and restart it. So these build artifacts afford us some interesting capabilities, and this is good for red-black, blue-green, et cetera, deployment models.
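To make that tag-flip rollback concrete, here's a minimal sketch (the image, tag, and unit names are invented, and it assumes the service unit pulls and runs the :production tag):

```
# "production" is just a floating tag; roll back by pointing it at the
# previous known-good build and restarting the service.
docker tag registry.example.com/myapp:build-1233 registry.example.com/myapp:production
docker push registry.example.com/myapp:production
systemctl restart myapp.service
```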
I'm going to walk through one deployment model that's an example of this, and I stole these images from Mr. Mike McGrath. Thank you, sir. So let's say you're running version 1 of your software and you want to do an update. You take a node out and upgrade it: your tests and CI run through, everything passes, and you update it to version 2. You put it back in rotation, everything looks good, so, OK, fine, let's roll it out to the rest of the environment. Seems pretty straightforward. Should work. Everything's good.

Now, what happens when something breaks on one of the nodes? Think of the worst doomsday scenario you can come up with. Somebody just walked into the data center and kicked a power cord out from under you. Something crashed. Somebody put a really bad custom RPM trigger into one of the packages your infrastructure team builds. Again, a new person shows up and commits to master by accident, and that gets packaged and rolled out as part of your deployment automation. Something. How clean is your rollback procedure? How do you verify your components? How do you know what state your file system is in? How do you know what state your kernel is in? Let's say the power cable got kicked out in the middle of a kernel update. Let's say it was in the middle of generating the initramfs: your changes are in GRUB, but the dracut run never finished building the initramfs, and you don't dare reboot. How do you log in to your system if it's somewhere in the cloud? Well, you can: there's the web console, and it's clunky and terrible, and we go through it and do what we must. But what if we could avoid that? Also, do you even know how far the package got?

By show of hands, who's familiar with RPM package triggers? About a third? That's good. OK, here they are. This is from the documentation that ships with RPM, in /usr/share/doc/rpm, and this is literally what it says. And all package managers have this; RPM is not special, it doesn't magically have these weird flaws. Every package manager has an order of operations, a set of steps it goes through, and at every step of the way some script or some trigger can take effect and cause a side effect. If this happens during an upgrade window for your application in your production environment, that is mutable state, something that can go wrong in the event of a failure. Whereas if all we're doing is an all-or-nothing update to a new deployment image, the worst-case scenario is that you roll back to the previous deployment image by changing a tag and restarting your service.

So what if we take that concept a step further and had immutable operating systems? That's where Project Atomic comes in. Project Atomic is an upstream project built around those concepts of immutability and the idea that you can have these deployment artifacts: a build artifact that can be tested as a cohesive unit and applied and rolled out as a cohesive unit, all or nothing. You're either upgraded to it or you're not. It includes some newer technologies, but it's also built on top of more traditional ones, in the sense that we're not reinventing the world overnight; we're making iterative improvements on the world we had before, so we keep a lot of our tried and trusted tools. I mentioned that it's an upstream project, and both Fedora and CentOS are working with the upstream Project Atomic team to create Atomic technology-based distributions. Being part of the Fedora team, I'm going to talk specifically about Fedora Atomic, but our friends in CentOS land are working with the upstream as well. It inherits everything from the parent distro: everything you previously had in terms of your RPM sets, what you expect to be on the system, all of your standard tools, you're going to find them there. What's changing is the delivery mechanism by which we update the system, and there's going to be a little bit of education involved in getting people up to speed with the newer technologies, as there always is.
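As a quick aside on those triggers: you can inspect the scriptlets on any RPM-based system yourself; the package names below are just arbitrary examples.

```
# Show the pre/post install and uninstall scriptlets a package carries.
rpm -q --scripts bash

# Show any trigger scriptlets, which fire when related packages change.
rpm -q --triggers glibc
```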
Beyond the new delivery mechanism, there's an added aspect to this: in an immutable environment you don't want changes, so you don't want to be doing package installs onto a live system. You instead build a new deployment artifact, and I'll talk a little bit about that. So, a minimized footprint: what Fedora Atomic Host is at face value is a minimized footprint, tuned to be the best out-of-the-box platform for running container-type workloads. Hopefully we've succeeded at that; if not, show up to the Fedora Cloud SIG and let us know. Participate; we're always looking to do better. Atomic updating and rollback means it's easy to deploy, update, and roll back using OSTrees, and I'll explain what OSTrees are in a minute; they're our new deployment artifact. And then orchestration, which is where the Kubernetes piece comes in, and we'll talk about that briefly.

Let me check how I'm doing on time here. Ten minutes? OK, I'll go a little quicker; I thought I had more time, and I do not.

OK, so Atomic hosts. Deployments are upgrades of RPM OSTrees. An OSTree is an entire root file system tree, managed very similarly to Git commits, in the sense that you have a reference you can roll back to or roll forward to, and it gives you a ref ID that you can move around. rpm-ostree is a utility and a technology that lets us build OSTrees out of sets of RPMs. So you can take the packaged software you've always had, compose it into an RPM OSTree, and use rpm-ostree as the distribution mechanism for that build artifact. Upgrades are atomic in nature, which is a lot of where Project Atomic got its name: it's all or nothing. If you're in the middle of an upgrade and somebody kicks the power cord out, there's no wondering where it was in an RPM trigger, where it was in the kernel update, or whether that dracut run finished building your initramfs, because all of that gets sorted out at build time. The actual deployment is just laying down the build artifact, and the entire tree is a cohesive unit, which gives you the ability to test it as a single thing.

The atomic command is currently a wrapper on top of rpm-ostree and Docker: the atomic host subcommands are the host-side commands, and the other atomic subcommands interact with the Docker daemon. With atomic host upgrade, you can see we're doing an update; it has a whole bunch more output, but I didn't want to clutter my slide too much. atomic host status shows those references I was talking about: you have the IDs and a version number, and you can actually go in and inspect which RPMs are in there. When it does an update, it tells you which RPMs changed and what versions they were updated to, those kinds of things.
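As a sketch of that workflow end to end (output omitted, and this assumes the atomic wrapper described above, which hands these off to rpm-ostree):

```
atomic host upgrade    # pull and stage the latest tree; it takes effect on reboot
atomic host status     # list deployments with their ref IDs and versions
atomic host rollback   # point the bootloader back at the previous deployment
```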
So, orchestration. We have this immutable infrastructure deployed in place, we have these Fedora Atomic operating system images deployed, and we're running containers. How do we run a bunch of containers across a bunch of hosts? Kubernetes. Glad you asked, thank you. Kubernetes is distributed orchestration for containers, and there's a bunch of vocabulary that comes into play here, but the main terms are pod, service, and replication controller; yes, I think those are it.

A pod is a set of containers scheduled as a single unit, so they will all land on one node. They share a number of aspects of the system, the process ID space, IPC, network, and UTS, and this allows them to speak to each other as though they were on localhost, because they will be. A service is a set of one or more pods; each pod can be distributed to a different node, and the service brings them together as a cohesive unit across the environment. From there, a replication controller manages those pods, a node-level proxy load balances traffic to the services, and there are pluggable options for overlay networking and persistent storage providers.

Developers. I did not forget about developers. Everything I've been talking about has, in a lot of ways, been catered to ops teams, or at least I hoped the tone of it was. However, development teams can take these concepts and ideas and apply them to their development lifecycle, and that's where OpenShift Origin comes in. OpenShift Origin builds on top of these concepts and technologies: it provides a standard containers API, and it provides a self-service, out-of-the-box dev panel where developers can pick and choose the components they want. Those components get deployed using these container technologies, and the developers are presented with a development environment they can simply commit code into. The code goes through a build pipeline that is completely configurable and completely scriptable, with Jenkins plugin APIs and those kinds of things. You can then either run the result in production using OpenShift, because OpenShift is built on top of Kubernetes, or take that container, that definition of an application, out of that environment and run it in a directly Kubernetes-based environment on top of Project Atomic, for a fully immutable infrastructure-based pipeline from dev to production.

And I've run out of time. Sorry, I meant to cover a few more things. Do I have questions? Yes. OK, the question basically boiled down to: is a pod in Kubernetes always on a single host? Yes. And I'll admit there's an asterisk on that, which is: to the best of my knowledge it is, and if that's changed, I apologize. But as defined as of not that long ago, a pod lived on a single host, with intercommunication between the containers inside the pod. Yes. OK, so the question is basically whether I have much perspective on how many people are actually running configuration management in the build. I don't necessarily know that I'd recommend it as the end state; I generally offer it up as a stepping stone. As you move into this, everything you used to do in config management can be moved to build time, and as you adopt the newer tech you should absolutely be using things like Kubernetes secrets to inject and supply config data to your containers. That's probably something I should have been clearer on. But yeah, I know of about half a dozen shops that inject their config management at build time, simply because they had so much investment in their config management that it was the sensible way to containerize applications. OK, the comment was: use config management to generate your Kubernetes files to begin with. Yeah, absolutely, you could totally do that, and I think it makes a lot of sense, especially if, again, you have a lot of investment in a config management product.
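To sketch what injecting config management at build time can look like (purely illustrative: the base image, playbook, and image name are made up, and any config management tool could stand in for Ansible here):

```
# Run the config management tool during the image build, so the resulting
# artifact is already fully configured and nothing runs on the end host.
cat > Dockerfile <<'EOF'
FROM fedora:23
COPY site.yml /srv/site.yml
RUN dnf -y install ansible \
 && ansible-playbook -i localhost, -c local /srv/site.yml \
 && dnf -y remove ansible \
 && dnf clean all
EOF

docker build -t myapp:configured .
```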
So, one of the teams I had previously worked on had many years of investment in a configuration management infrastructure and had many, many services built out that way. The first stepping stone into the container runtime world, where we started running things in production in Docker, was to just inject the config management run at build time. Any other questions? One in the back, yes? OK, so the comment was effectively that the consensus at Config Management Camp was not that config management is dead or dying, but that it needs to evolve to fit the newer technology and newer workflows, similar to this. Did I get that right? All right, cool. Any other questions? All right, thank you all for your time. And if you asked a question, please come see me; I have swag for you.