Today, I'm really pleased to have with us Mike McGrath, formerly of the OpenShift engineering team and one of the founding members of OpenShift, who has recently switched over to work on Project Atomic and lead some of the engineering efforts over there. He's going to give us a little bit of background on what Project Atomic is and how it works in context with OpenShift, so I'm going to let Mike introduce himself and go from there. Hi everyone. Like Dan said, my name is Mike McGrath and I'm currently working on an internal team at Red Hat under application infrastructure. Basically my role there is on the architecture side; I help try to pull a lot of these different pieces together. And Dan, can you mute your phone? Yep, doing so. So my role is to take all of these different pieces and merge them together into what becomes Project Atomic. I'm going to talk to you today about some of the overlap between the two and how they work together. I'm also going to talk a little bit about OpenShift and how you can take Project Atomic and Atomic hosts and very directly integrate them into some of the plans that we have. So first up, Project Atomic. Project Atomic and these Atomic hosts are a new way of developing, deploying, and managing operating systems. This is not your grandma's operating system that we're creating here. Atomic hosts provide a seamless way to upgrade your operating system, and you'll see some examples later. YUM is not on these systems, none of that. OSTree is a new technology developed by Colin Walters that allows us to upgrade an entire operating system at a time, as opposed to just parts of it. Traditionally, with a yum upgrade you might get a bash update, maybe a JBoss update, and the rest of your operating system persists. This is a little bit different. The idea here is that the hosts are very small.
So this is an actual command that I ran inside of a Fedora-based Atomic host just before the meeting, and it has about 319 RPMs on it. The idea is that once you have this operating system up and running, you'll be pulling everything else in via containers, and I'll give one simple and one fairly more complex example of that later. On the container side, with Atomic we're kind of assuming you already have your containers built. That is one fundamental difference between us and OpenShift: OpenShift provides a build environment and a deployment environment for you, while our system is much simpler. You take the containers that already exist and deploy them onto an Atomic system. So now that you see it's a little bit different, here are some examples of how you'd actually use the thing. You can get an Atomic operating system in three different ways. There's a Fedora-based one that is very brand new, there's a CentOS one that's a little bit older, and if you need a fully supported environment, we obviously have the Red Hat supported Red Hat Enterprise Linux version. You can check out more of those at projectatomic.io. Once you have it up and running, you'll notice that some of the commands you typically use aren't there, but also that parts of the operating system are actually mounted read-only. So if you try to remove /usr/bin, you can see here that that fails miserably, and that's because this is a very different way of doing operating systems. One of the things you'll notice when using Atomic, and one of the commands you run a lot, is the atomic command. atomic ties into a couple of different things on the system. atomic host ties into the host itself and does the upgrades, and atomic host status gives you information about what you're currently running.
In this case, you can see I've got a timestamp from March 24th, it's running version 22.24, I've got a base commit ID, and the OS name is fedora-atomic. Pretty straightforward. Now, this is a pretty old version at this point, so I'd say we need to upgrade to something newer. To do that, you run the atomic host upgrade command. This actually goes out to a Fedora website and downloads information from there. In this case, it downloaded some metadata and some objects, and you can see the transfer from the remote mirror took around two minutes. In that, it downloaded several new packages and added a new Docker package. Even though I've run atomic host upgrade, the system is still running on that old version. It's downloaded all this new stuff, but it's not actually using it yet, and in order for me to start using it, I have to reboot. This is fairly new to people in the whole pets-versus-cattle thing, where we want to reboot often when changes happen. Systems like OpenShift and Kubernetes can help make your applications highly available, but the expectation on the Atomic side is that when you upgrade, you do a full reboot to bring that system back online. Once the reboot is done, if something went wrong, you can also run atomic host rollback to roll back to the previous version, and it keeps multiple versions at a time. In this case, the rollback took only three seconds. I've cut a lot of the output from both of these just so they fit on the slide, but it actually does tell you which packages changed, were added, or were removed, and everything else. And just like before, when you do a rollback, you have to reboot to get that system back online. So just a quick run-through: we did an atomic upgrade, we rebooted, we pretended something was wrong with it, and then we rolled back.
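The upgrade-and-rollback cycle described above can be sketched as a short session. This is an illustrative sketch assuming a Fedora-based Atomic host; the subcommand names are the ones shown in the talk, and the output is omitted:

```shell
# Show the currently booted deployment (timestamp, version, commit ID, OS name)
atomic host status

# Download the new OSTree commit and packages; the running system is untouched
atomic host upgrade

# Reboot to actually start running the new tree
systemctl reboot

# If something went wrong after the upgrade, point back at the previous tree
atomic host rollback

# A second reboot lands you on the rolled-back deployment
systemctl reboot
```

Nothing takes effect until the reboot, which is what makes the upgrade and the rollback both atomic: you are always running one complete tree or the other, never a half-applied mix.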
And if we run the atomic host status command now, we can see that both of these versions exist on the system, and we can select either one. After that second reboot, we have officially rolled back to the previous version, which is noted by the star here. This is configurable; you can keep multiple operating system trees at a time, and if you want to know more about how those work, I suggest you just get started. Just a quick touch on some of the other things you need to know. /etc more or less persists between upgrades, so if you've added users or made configuration changes in /etc, those will persist between upgrades. You don't have write access to /usr/bin or anything else in the /usr tree, so those will always be upgraded as you go. Same thing with /var: most of that persists between upgrades, so if you're storing logs there or whatever, those logs will remain even after a reboot. So now you've got your Atomic operating system up and running. The question is, how do you do other things? For that, we've got Docker as our primary container system. You can use docker run for Apache to get the latest image, from Docker Hub in this case. Red Hat also has a registry for our official images, but this one is a standard community Fedora image. It'll go out and download all of the container layers and images you need to get Apache up and running, and by the time this command is done, you'll have an Apache image running and exposed, and you can actually access it. All very normal stuff. And just to make it clear, Docker is included in that standard Atomic image. One of the more interesting developments more recently with Atomic is the invention of these super privileged containers. In the traditional container world, the idea is that you have an application component or microservice or whatever.
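The Docker example above looks roughly like this; treat the image name fedora/apache as an assumption based on the community Fedora image mentioned in the talk:

```shell
# Pull the community Fedora Apache image and start it detached,
# publishing container port 80 on the host
docker run -d --name web -p 80:80 fedora/apache

# Once the layers finish downloading, the site should answer on the host
curl http://localhost/
```

This is plain Docker, nothing Atomic-specific: the same command works on any host with the Docker daemon running.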
It's running standalone inside of this container. It cannot see other containers, and it cannot really see or interact with the host. But if you're logged in as root on the host, you can do things to that container: for example, you can exec inside the container, you can kill the process, you can interact with it quite a lot. This is the security model that was designed around containers, and we've actually taken that a step further with SELinux and other things to make those containers even more contained. A super privileged container runs kind of the opposite of that: you want to build an image that may need to interact with the operating system. So why would you want to do this? Well, remember I said that you can't install new software onto an Atomic host; YUM is not installed. As a result, our Atomic images don't even have rsyslog installed. What if you like rsyslog? What if you want to use it for your logging and security information? Well, to use it, you'd need a super privileged container. In this example, we've got a rhel7/rsyslog container that we've built. What happens is rsyslog runs inside of a container, but it logs to a location outside of the container, so in this case you'll see /var/log. Underneath, this is a standard Docker image, but it's got some additional metadata in the Dockerfile that the atomic command will parse and run. When you run atomic install, it will go out and download that new image, which you can see here, and then it will run the docker run command with --privileged, and it will actually print the Docker command that it's running. So this isn't a Docker fork or anything weird with Docker; instead, it's a convenient way to run a whole lot of Docker flags at one time. And I've actually cut off a couple of lines of the command that runs here.
By the end of this, it will go out and download the image and install some files onto the actual host. So I can edit the rsyslog config in /etc/rsyslog.conf on the host instead of inside the container, and that gets passed into the container. It gets a little complicated, but in general, all you have to know is that you can atomic install rhel7/rsyslog and then treat it like a normal rsyslog server. You can edit /etc/rsyslog.conf in the location you'd expect to find it, and it's logging to /var/log in the location you'd expect to find it, which is a really neat way of doing things. So with the SPC installed, I haven't actually started running it yet, and just to prove that, I tried to tail /var/log/messages. All of these commands are being run from the host, not inside the container, which is an important distinction to make. The next command I run there is atomic run rhel7/rsyslog, and you can see the Docker command it's running. It's passing things like PKI certificates and rsyslog.conf into the container, and it is also logging that information outside of the container. You can't see it here because I've cut the wrapping off, but it's logging to /var/log on the host. And once it's running, you can just tail /var/log/messages, and all the information you would expect to see in /var/log/messages is there. This is just one of the many things we're looking at doing with super privileged containers. There's this balance: application data and that sort of workflow, it's very obvious to people how that fits into a container. But if you're going to be doing these atomic upgrades on these very basic systems, what does it mean to run more core features like a syslogger, or perhaps identity management?
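The "additional metadata in the Dockerfile" that the atomic command parses is a pair of labels, INSTALL and RUN. This is a hedged sketch of what such a Dockerfile might look like; the exact flags in Red Hat's rhel7/rsyslog image may differ, and install.sh here is a hypothetical helper script:

```dockerfile
FROM rhel7
RUN yum -y install rsyslog && yum clean all

# The atomic command substitutes IMAGE and NAME, then runs these commands.
# INSTALL copies config files (like /etc/rsyslog.conf) out onto the host;
# RUN starts rsyslog privileged, with host config and /var/log bind-mounted in.
LABEL INSTALL="docker run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=IMAGE -e NAME=NAME IMAGE /bin/install.sh"
LABEL RUN="docker run -d --privileged --name NAME -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /var/log:/var/log IMAGE"

CMD ["/usr/sbin/rsyslogd", "-n"]
```

So atomic install and atomic run are just executing the docker run lines stored in these labels, which is why the tool can print the exact Docker command it is about to run.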
What would it really look like to create an identity-management container, have it run, and suddenly have all these users available on the system, even though the system itself doesn't have any of those identity binaries in it, since they've all been pulled in from a container? That's what we're looking at doing now with super privileged containers. When you're going through your first-boot configuration on Atomic, a lot of people use cloud-init. You can also use an installation ISO, but I really like cloud-init for these things: basically you start the system up, cloud-init runs, and it does all the configuration for you. It pulls down your SSH keys and runs whatever commands may be needed to set that system up. So our goal with Atomic is to provide the ultimate operating system to run containers on; that's really what we're focused on. Red Hat Enterprise Linux Atomic Host is our supported offering if you want to use it in a production environment, and I'd also mention the community versions for CentOS and Fedora as well. The real key thing here is that by focusing on the needs of a container, we can pick some really sane defaults and make architectural decisions for you for deploying these hosts. I think that's a really neat feature, because we can focus on a very specific workload in containers, but inside of those containers you have a more general workload, and trying to provide for the 95% case presents really interesting challenges for us in making sure these things actually deploy. Now you're probably starting to think: wait a minute, Atomic for containers? I logged into an OpenShift meeting; I want to use OpenShift for my containers. Well, that's what this next section is going to cover. The thing to remember about Atomic is that we provide a lot of the building blocks for running containers and clusters. The problem is that it's all kind of DIY, and we generally don't go too far into the containers themselves.
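The first-boot configuration described above is driven by a small cloud-init user-data file. Here's a minimal cloud-config sketch; the user name, key, and commands are placeholders, not anything from the talk:

```yaml
#cloud-config
# Create a user and install their SSH key on first boot
users:
  - name: admin
    groups: wheel
    ssh_authorized_keys:
      - ssh-rsa AAAA... admin@example.com

# Run arbitrary one-time setup commands once the system is up
runcmd:
  - systemctl enable --now docker
```

You attach this as the instance's user-data (via your cloud provider or a config-drive ISO), and cloud-init applies it on the first boot of the Atomic host.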
We provide some interesting side things that we've built, like rsyslog, but a lot of the language containers and other things you can get and run from anywhere, and OpenShift certainly has many of those as well. What happens when you get a whole bunch of these containers, or you need to build a lot of them? For that, you're going to need more than just Atomic; you're going to need orchestration and other workflow items. That's where OpenShift and Kubernetes come in. Kubernetes, if you're not familiar with it, is built into OpenShift v3, and v3 is currently in pre-release, so stay tuned for news on when it will actually be ready, but you can go to GitHub now, try it out, and help develop it. The whole goal of this is that you can build these containers and run lots of them at scale; that's really what we're focusing on with OpenShift. Using Kubernetes and OpenShift v3, you can actually take an Atomic host and use that for deployment instead of a standard traditional host. Atomic currently comes with Kubernetes installed, and you can configure Kubernetes yourself if you enjoy doing that sort of thing and have the time, or you could just go download OpenShift and have it do all that for you. The real tie-in is that, with Atomic, we're trying to provide that ultimate experience for containers to run, including spinning new hosts up and down, so that OpenShift can control and maintain these hosts, or, if you need to do it yourself, you can do that with Kubernetes. And OpenShift v3, just to be clear, is supported both on standard operating systems like RHEL and on those Atomic hosts, and it's the Kubernetes orchestration that makes that feasible and possible. So with that, I had some demos that I was going to run through just to show you a little bit more.
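To give a flavor of the Kubernetes side, here is a minimal pod definition. Note this uses the stable v1 API shape rather than whatever the v3 pre-release shipped at the time, and it reuses the assumed fedora/apache image from earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: apache
      image: fedora/apache
      ports:
        - containerPort: 80
```

You'd hand this to the cluster with kubectl create -f pod.yaml, or let OpenShift generate and manage objects like this for you as part of its build and deployment workflow.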
If you want to find out more, the place to go is projectatomic.io. But also, Red Hat Summit is coming up; I'm going to be there, and I'd love to talk about this stuff, so if you have questions, come find me. I'll stop for just a second and see if there are any questions. So thanks, Mike, for that. And I'll just shamefully say that BlueJeans is failing us with chat and recording today. I'm not quite sure why, but I'm getting little pop-up messages from their admin saying it's not working. So... That's a bummer. It is a bummer. So what I'm going to do right now is turn the video on. I guess I can do that for everybody, and people who want to ask a question can raise their hand. Let me see if I can do that. Diane? Yes? Diane, I don't know, but we all got a message from BlueJeans that said they have a problem. Yeah, I got that too. So I'm going to just... If you have a question, unmute yourself. I'm going to unmute everybody right now and see how that works. And I'll also say, if anybody wants to type into #openshift on IRC, on freenode.net, I'll hang out there and see if any questions pop up as well. So feel free to ping me there. Just one of those days, guys, when the technology actually fails you. So the way you can ask a question right now is to take yourself off mute; right now David Chia is the only one who's off mute. And ask a question. So I'm going to leave it open for a few seconds here. And Mateus is off mute. Steven Housty is off mute as well. But other than that, no one else has taken themselves off mute. Any of you who are off mute, do you have a question? If not... Hi, it's Jared. I've got a question. Go for it. Last month, the cgroups developers did a presentation at Bloomberg about their re-implementation of cgroups in the Linux kernel.
I was wondering how closely you folks are tracking that, and whether you see any major re-architecting coming, let's say in the next 12 to 18 months, as they re-implement cgroups. So with our cgroups implementation, I think we're largely trying to move the configuration and ownership of that more towards systemd. The way that cgroups are controlled will change a bit, hopefully for the easier, and we are tracking it very closely. One thing I didn't mention is that on Project Atomic we have a very aggressive release cycle that we're trying to work through, especially in the Fedora and CentOS worlds, but it's also trickling down into RHEL. The goal is that Atomic hosts, as a whole, will be much closer to what is upstream than a traditional RHEL host is today. The reason for that is so we can pick up some of those newer features, like the new cgroups implementation, but also because if you're using Atomic hosts, the pieces that really matter to you are found in Docker or Kubernetes, and both of those projects are moving very quickly. So we do want to pick up those newer features earlier in the cycle rather than later. That's great. That makes a lot of sense. I bet you've got a follow-up. Go ahead. How would you, in a sentence or two, compare Project Atomic to CoreOS? I think, to me, they operate in very similar spaces. CoreOS is very focused on the minimal operating system required to get a containerized environment up, whereas I'd say Project Atomic is more focused on some of the enterprise and scaling use cases. If you want to take a look at CoreOS, it's definitely out there; take a look. But for me, I think we're a bit more enterprise-focused today than CoreOS is, and that's largely, I think, an artifact of Red Hat's traditional customer base. Makes a lot of sense. So does anyone else have a question? If not, maybe you can roll into your...
Yeah, I have a quick question. Go for it. So what's the best way to get started with Project Atomic? That's a two-part question. One part is: what if I want to develop Project Atomic itself, which would mean making changes to OSTree or Kubernetes or Docker? To do that, I would join the Fedora SIG mailing list and the Atomic mailing list, mention what you want to do, and get started there. The other part is: what do I do if I want to create images for this emerging ecosystem? For that, I would recommend taking a look at the CentOS spins. They're going to be a bit more stable than the Fedora spins, but still new enough to have all the interesting features, like anything new that might come about with the super privileged container work. So on that side, I would start there. You can always ship your images upstream to the Docker Hub, or, if you're an ISV or something, you're also welcome to get started with Red Hat's partner ecosystem; there's information on the website about how to do that. I guess I was also asking: what's the easiest path if I'm a web developer and I just want to try Atomic and try putting some Docker containers on it? How would you recommend approaching that learning curve? For that one, I would go to projectatomic.io. There's a getting-started page there, and for me, I might download the CentOS version and give that a look. Thanks. Anyone else want to step up and ask a question? All right, why don't we roll into your demo? I think I'm going to give up there; we'll have to save that demo for another day. That sounds like a plan. We'll get you back when Cockpit's working and see what the next revelation is from Project Atomic.
So if there aren't any other questions, I'm going to let you all get back to your work today, because everybody's got lots on their plate as we come up to Red Hat Summit, and I'll do a shameless plug for that. You'll see more about all of these projects, OpenShift, Project Atomic, Kubernetes, and Docker, at Red Hat Summit, as well as at DevNation. So if you're coming, please let us know, and hopefully we can get a commons meetup as well at Red Hat Summit. We'll see you all there. And we'll probably see you all next week, because we have another session coming up on schedulers with Abhishek Gupta: next week, same time, same place. So thanks again, Mike, for joining us, and we'll see what we can do about getting a recording out of this one. Take care, all. Thanks, Diane.