Hi everyone, my name is Aditya. I work at BrowserStack as an infrastructure lead, and I also contribute to the Fedora Project and other open source utilities. Today I'm going to talk about running containers. How many of you know about Docker? Right, so you know that running Docker is cool. Now we're going to look at what it actually takes to run this cool stuff: what the problems are, how Project Atomic and its associated projects help, and, if possible, I'll try to give a short demo.

So, the problem: now that we have figured out one tool to rule everything, we need a stable environment to run that one tool. We need to support automation, because let's face it, containers are good, but like any software they can go wrong: they can crash, they can fail to boot when you need them to. A lot of bad things can happen there. And now that you are managing containers, and the applications inside them, the point of the whole exercise should be that you focus on the containers, not on the host, because the containers are the cool stuff.

So, this is Project Atomic. Chetan Bhagat very rightly said this about Project Atomic. Come on, dead audience. Okay, J.R.R. Tolkien said this about the One Ring. Project Atomic is an umbrella project that Red Hat and some of its associates, like the Fedora Project and CentOS, are building. Essentially, we are trying to bring together a lot of projects in the container ecosystem to build better tools, so that we can reduce the pain of managing containers. The first component I'm going to introduce is the Atomic Host itself. Atomic Host is a very minimal operating system.
It's built from the RPMs that come from Red Hat Enterprise Linux, CentOS, or Fedora, so you get the same Red Hat, CentOS, and Fedora quality you are already familiar with. It has robust atomic upgrades and rollbacks, and it uses systemd. You've heard of atomicity in databases, right, that transactions are atomic? How many of you have heard that term? Quite a few of you. I'll build on that: we are working towards a very robust atomic upgrade system. We use systemd for booting and managing the system. Atomic is now in good enough shape that it can be deployed easily on cloud environments like EC2 or OpenStack, it can be virtualized with any hypervisor such as QEMU, and you can install it on bare metal as well.

It includes rpm-ostree. How many of you have heard of rpm-ostree? Okay, I'm sure some of you are working with GNOME. What rpm-ostree gives you is bootable, immutable, versioned filesystem trees. You've worked with Git a lot; you know that all your files are versioned. We take that idea and add the concept of booting that entire filesystem. So now your filesystem is versioned: you make a change to it, you can revert it. Imagine being able to revert an entire operating system, and you have some idea of what Atomic can do. It's composed from standard RPMs, which we've already covered.

Why do atomic upgrades and rollbacks matter? When you're managing something in a data center, something production-ready, you don't want anything to fail halfway. You don't want updates stopping in the middle, partial updates, packages going bad. Project Atomic gives you a transaction-based upgrade.
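The transactional upgrade and rollback described above are driven by the `rpm-ostree` command. Here is a hedged sketch of a typical session: the subcommands are the real ones, but they only do anything on an actual Atomic host, so the small wrapper just echoes them on any other machine.

```shell
# Sketch of an atomic upgrade/rollback session on an Atomic host.
# run() executes the command if it exists, otherwise prints what would run,
# so this sketch is safe to paste anywhere.
run() {
  if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi
}

run rpm-ostree status    # show the deployed, bootable filesystem trees
run rpm-ostree upgrade   # stage the new tree atomically; the running system is untouched
run rpm-ostree rollback  # point the bootloader back at the previous tree
# A reboot is needed for the newly staged (or rolled-back) tree to take effect.
```

Because each tree is a complete, versioned filesystem, the upgrade either lands in full at the next boot or not at all, which is exactly the database-transaction behavior discussed next.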
So if you're upgrading your OS, either the entire OS is upgraded or nothing at all, similar to a database transaction: when you swipe a card, either the full amount is deducted or nothing; there's no partial state. That's what happens with the operating system here.

To achieve this, we've made a lot of the system immutable. By immutable I mean that most of the filesystem is not writable. If you try to build a binary and drop it into /usr/bin or somewhere like that, you won't be able to. Everything is immutable except /var and /etc: /etc because, of course, you need to configure things, and /var because you need to store things like your Docker images and your home directories somewhere. Those are the only two writable directories.

systemd: I'm sure a lot of you have heard a lot about it. Just a brief introduction: it's a system and service manager for Linux. It has already replaced the traditional init system in CentOS 7 and RHEL 7, and Fedora has been shipping systemd for quite a few releases now. It's highly modular: you can write modules for almost everything an OS does, plug them in, and you're good to go. Talking about systemd could take a good hour on its own, so I suggest you go to 0pointer.de, Lennart Poettering's website; he has extensive documentation on the why and the how.

Atomic also includes Cockpit. If you've ever worked with Webmin, from a few years ago, something like that? A few of you. The Cockpit project was developed independently of the whole Docker ecosystem, but it ended up shipping with Project Atomic. What it does is let you attach a lot of hosts to it and manage them.
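Since /etc is one of the two writable directories, the usual way to run a long-lived container on an Atomic host is a systemd unit dropped into /etc/systemd/system. A minimal sketch follows; the unit name, the container name "myapp", and the nginx image are all illustrative, not from the talk.

```shell
# Hedged sketch: a minimal systemd unit that supervises a Docker container.
# All names here (myapp, nginx, the port mapping) are illustrative.
cat > myapp-container.service <<'EOF'
[Unit]
Description=Example containerized app
Requires=docker.service
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:80 nginx
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# On the host you would copy this into /etc/systemd/system/ and run:
#   systemctl daemon-reload && systemctl enable --now myapp-container.service
```

`Restart=always` is what gives you the "bring it back if it crashes" behavior on a single host; Kubernetes, introduced below, generalizes that across a fleet.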
For example, it gives you pretty graphs of how much CPU and RAM you are actually using. You can manage hosts in the sense that you can start and stop services. You can check which containers are running on your Project Atomic host, which images you have available, download and run more images, and see how many resources your containers are using, all from a nice GUI. That helps a lot in visualizing how efficient your system is actually going to be. This is just one of the graphs: you can see the combined CPU usage of one of the systems, the memory usage, and a few containers that I have here. I can drill down even further: I can click on a container and it will show me the utilization of that container's resources as well. All of that can be managed.

And lastly, I'd like to introduce Kubernetes. Kubernetes is a project by Google for managing running containers, and it provides a very fault-tolerant environment. For example, say one container can handle X amount of traffic and you usually serve 10X of that; ideally you'd want at least ten of those containers running in your infrastructure. But sometimes, for unforeseeable reasons, containers go bad: they crash, or you get a massive spike. In those cases Kubernetes helps you very easily by scaling up and making things fault tolerant. If one of them crashes, Kubernetes observes your environment, detects that it has crashed, and starts booting a replacement; in no time your container is back serving traffic. That helps the scalability cause a lot. You can create clusters of applications using Kubernetes.
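The "keep ten copies alive" idea above maps to a Kubernetes ReplicationController, which was the replication primitive at the time of this talk. A hedged sketch; the name "image-server" and the nginx image are stand-ins for whatever actually serves the traffic.

```shell
# Sketch of "run 10 copies and keep them alive" with a ReplicationController.
# All names and the image are illustrative.
cat > image-server-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: image-server
spec:
  replicas: 10          # Kubernetes keeps ten pods running at all times
  selector:
    app: image-server
  template:
    metadata:
      labels:
        app: image-server
    spec:
      containers:
      - name: web
        image: nginx    # stands in for the real traffic-serving container
        ports:
        - containerPort: 80
EOF
# kubectl create -f image-server-rc.yaml
# If a pod crashes, the controller sees the live count drop below 10
# and schedules a replacement automatically.
```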
And eventually you can scale out an entire application cluster, databases, web servers, everything, using Kubernetes. There are a lot of examples in the Google Cloud Platform repos; you can check out some very common ones with Nginx, WordPress, and so on.

So these are some of the main components. There are more: Project Atomic is a very young project at the moment, and every week or month more tools get added. The latest would be Nulecule, a standard for describing these application clusters, but let me not talk about it now because it's just too hazy. Going forward we'll have more of that.

To start an Atomic host you will need a cloud-init server. If you've worked with something like AWS, have you ever wondered how, when you supply your key, your instance already has it? That entire process is taken care of by something called cloud-init. What cloud-init does is take the initial data you feed it and supply all of it to the host. So things like the instance ID, IP addresses, public IP addresses, your keys, and your users are all supplied to the instance from cloud-init data. Atomic Host needs that too. You can either run a cloud-init server, or, if you're not in a position to run one, you can fake it. For smaller deployments, this is what I do: I basically put the metadata in text files, create an ISO from them, and make the virtualization tool behave as if a cloud-init server were running. It's a whole process, and I'm not going to demo it now because it takes time. But essentially you supply the basic things, the hostname, the instance ID, your password, your keys, and so on, and then you generate an ISO image out of it.
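The fake-it approach described above is cloud-init's NoCloud datasource: two small files on an ISO labelled "cidata". A hedged sketch; the instance-id, hostname, password, and SSH key below are placeholders.

```shell
# Fake a cloud-init server with a seed ISO (cloud-init's NoCloud datasource).
# Every value below is a placeholder; substitute your own.
cat > meta-data <<'EOF'
instance-id: atomic-host-001
local-hostname: atomic01
EOF

cat > user-data <<'EOF'
#cloud-config
password: atomic
ssh_pwauth: true
chpasswd: { expire: false }
ssh_authorized_keys:
  - ssh-rsa AAAA-placeholder-key user@laptop
EOF

# The volume label must be "cidata" for cloud-init to find the seed.
# genisoimage may be absent on this machine, so guard the call.
if command -v genisoimage >/dev/null 2>&1; then
  genisoimage -output seed.iso -volid cidata -joliet -rock meta-data user-data
fi
```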
This ISO image is then mounted as a disk on your instance, and the instance reads from it thinking that it's actually reading from a cloud-init server. So, there's a demo I'd like to do. Let's see. In the meantime, do you have any questions? Yes, please.

Audience: In the same space right now you have two other interesting projects: Canonical's Ubuntu Snappy Core and, the most popular one currently, CoreOS, especially with Fleet, etcd, and Weave. What are the Atomic equivalents for running something like Fleet, which is great because of the distributed-systemd idea?

Aditya: At the moment, I don't think any work has been done in that direction to bring Fleet to Project Atomic. However, Project Atomic is working with a lot of the people you mentioned. I think in the near future you can expect Rocket to be in a state to run with Project Atomic.

Audience: Isn't the basis of Rocket to get rid of PID 1, having systemd spawn the Docker process, where they take the approach that Docker would just be a process?

Aditya: Right, that's why I said "in some time"; we are trying to sort those issues out. Another thing that will come: Docker is one of the things Project Atomic supports, and Rocket is definitely the next thing we are looking forward to. There will be systemd-nspawn as well, which also helps with containers. Now, between systemd-nspawn and Rocket, I know there is a direct conflict and collision, but...

Audience: Given that it's driven by Red Hat and Lennart Poettering...

Aditya: Yes, yes it is. As I said, it's an umbrella project to bring a lot of stray projects together. Well, "stray" might not be the right word: a lot of different projects working in the same space. That's what we are trying to do. So yes: Rocket, yes; etcd is something we already use with Kubernetes, so etcd is already there; Fleet, I'm not really sure about. Right, so Kubernetes is what I'm...
...going to try to demo right now. With Kubernetes, you run the Kubernetes API server on an Atomic host or somewhere, and a kubelet runs on each Atomic host; the kubelet is the component that actually runs these Docker containers for you. That's the idea.

Audience: In CoreOS you have a toolbox where you can get a particular binary or compiler, like Python or pip or any tool, which automatically brings up a container of Ubuntu or Fedora, and that eventually gives you command-line tooling on the CoreOS machine. Does Project Atomic have anything like that? Because I don't see any package manager in Project Atomic where you can install a particular thing and then just use it for some sort of automation. Ansible has the same problem: on CoreOS you need to do it with toolbox, install the Docker-based plugin, and then do the Ansible automation. But in Project Atomic I don't see anything like that.

Aditya: I'm sorry, I didn't get it. Can you repeat that?

Audience: Kubernetes relies on the ambassador pattern for HA and for distributed loads, where you have something like Weave running in CoreOS, so you have an L2 or L3 plane across your entire fleet.

Aditya: Right. Okay, let me answer him first. His question was about toolbox and CoreOS and all that. The philosophy of Project Atomic is a bit different: technically, you should never actually log in to the Atomic host. So modifying the Atomic host, whether via a package manager or any other means, is out of the question. This is something we don't want people to do; we don't want people to get into the host, ever.

Audience: I agree, it's a lightweight OS; that's the reason you go for this kind of OS underneath Docker, to make it much lighter.
Audience: But when it comes to debugging these things, we need some sort of tooling that at least gives us a chance to debug and get information back from the host.

Aditya: In those scenarios, I would refer you to super privileged containers, SPCs. What an SPC does is: if you have a bunch of containers that are not behaving nicely, you can use it to collect their logs and data such as sar output, and have all that information ready for debugging.

Audience: So from your answer, we can do some of these things. But what if I want to do configuration management through Ansible?

Aditya: The philosophy is that we don't want you to do that. If you want to use Ansible to modify the host, then clearly you're looking at the wrong product; then you should probably go towards CentOS Core or a CentOS cloud offering or something like that.

Audience: But Project Atomic is built on the base of CentOS or Fedora, right? So when I do an rpm-ostree upgrade, can I install these tools then and get Ansible running?

Aditya: You cannot install; there is no concept of installing. The concept is that you can do atomic upgrades of the already-existing RPMs. What we can alternatively offer you is a way to compose your own distribution, which is not very difficult: you can pick up CentOS RPMs or Fedora RPMs and compose your own read-only filesystem tree. The whole philosophy of making it read-only is that you should not be able to install arbitrary tools. If you are not comfortable with this concept and you want a host that can be modified and worked on, then I think you're looking at the wrong product altogether; you should explore CentOS Core, the main CentOS operating system, in that case.
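Coming back to the debugging question: a super privileged container session typically boils down to one docker invocation that shares the host's namespaces and mounts its filesystem. This sketch prints the command rather than executing it, since it needs a real host to be useful; the rhel-tools image is the stock SPC debugging image, and the exact flag set is the conventional one, not quoted from the talk.

```shell
# Sketch of the docker invocation behind a super privileged container (SPC).
# Printed, not executed: it only makes sense on a real (Atomic) host.
spc="docker run -it --privileged \
  --net=host --pid=host --ipc=host \
  -v /run:/run -v /var/log:/var/log -v /:/host \
  registry.access.redhat.com/rhel7/rhel-tools /bin/bash"
# --net/--pid/--ipc=host: share the host's namespaces, so ps, ss, etc.
#   inside the container see the host's processes and sockets.
# -v /:/host: the entire host filesystem is visible under /host.
echo "$spc"
```

From inside such a container you can read the host's journal and /var/log and run tools like sar without installing anything on the immutable host, which is exactly the point.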
If you want a non-immutable system where you can install packages of your choice, then Atomic is not going to do that for you.

Moderator: You have five more minutes. Do you want to take more questions or do the demo?

Aditya: What do you want, guys? Questions or demo? Demo. We'll do the demo, and then questions maybe outside. I'll make it very short. Can you see the font here? In the last row? This is difficult. Is there a command line to do that? Okay.

Right. Kubernetes works in a very classic master-slave architecture. There is a Kubernetes API server, which you can think of as the master, and then there are kubelets. The kubelets are the ones that actually execute: they talk to the API server, get the data about what containers are wanted, and boot them accordingly. Right now I have only one node with me, a virtual machine running on my laptop. What I want to do is create a simple container that just serves one image over HTTP. I have it ready. This is the standard pod definition; you can look it up in the Kubernetes repo, there are a lot of such examples out there. This is the pod, which is already running; I started it earlier, and it serves a static image. This is the image being served.

Now, what I'm going to try is to kill that container manually. As soon as I kill it, this image should vanish. I've stopped it; it takes a few seconds. The container is stopped, and the image is no longer there. This is a very common, very production-like scenario where your container crashes. As soon as it crashes, Kubernetes is supposed to detect that and bring it back. So I stopped the container; let me show you what happened. If you see, my container is back here. Right? That's our kubelet at work.
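The demo flow above can be reconstructed roughly as follows. This is a hedged sketch: the pod file name is a guess, and the wrapper echoes each command where kubectl or docker is not installed, so it is safe to run anywhere.

```shell
# Reconstruction of the demo: start a pod, kill its container by hand,
# and watch Kubernetes bring it back. File and pod names are illustrative.
maybe() {
  if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "(would run) $*"; fi
}

maybe kubectl create -f image-pod.yaml   # start the image-serving pod
maybe kubectl get pods                   # should show it Running
# On the node, simulate the crash by stopping the container directly:
#   docker ps | grep image-server
#   docker stop <container-id>
# The kubelet notices the pod's container died and restarts it, so a
# repeated "kubectl get pods" shows it Running again within seconds.
```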
Very short demo. If you want detailed code or anything, catch me outside or tomorrow at the Cloud BoF; I can explain it in more detail. Do you have any questions I can take now? I think I have a couple of minutes more. No? Time's up. Okay, we're done.

Moderator: Thank you, Aditya.

Aditya: Thank you.