Hi, my name is Aditya and I contribute to the Fedora Project in various capacities. I have been a package maintainer; I have contributed to Ansible, Nagios and Puppet, and these days I am running around with Project Atomic and certain other cloud-based utilities. So this talk is going to be a brief introduction to what Project Atomic does and what components it has. I will try to show you a couple of demos if possible, around Kubernetes and around Cockpit. That is the whole agenda. Sounds good? Okay, so we are going to discuss the problem that we are trying to solve with Project Atomic and the entire container ecosystem, how Docker helped us, and, now that Project Atomic is here, which components can make this scenario even better. And we are going to look at all of those components, right? So, what exactly is the problem? The problem is that my production systems need to be homogeneous and my environments need to be in sync. For example, if something is working on my laptop, it should always work on QA's machine, on staging environments and on production environments. That is the goal: it should just work. I need to ship my environment to colleagues, say for testing, which is a very tedious job. You cannot just ship your physical hard drives, and even shipping virtual machines is not that easy, right? We need a stable environment to run containers, and we need a very lightweight environment. The environment needs to support automation and orchestration; it should give you the kind of flexibility which you expect out of any automated system. And managing the host itself should be much less of your concern. Your concern should be more about managing the containers themselves and not the host, right? So that is the goal. Now, we have all talked a lot about Docker, so I'm not going to go into details.
We know that Docker provides us lightweight Linux containers which boot up really fast. It has an API, you can incrementally build up containers, you can revert them if required, you can do a whole bunch of stuff. Docker has also introduced us to Docker registries, using which we can share images with our colleagues; we can set up our own private registry so that we can safeguard our corporate secrets and all those things. All of that is there; I'm just not going to go into details because this is not a Docker talk. However, if you want me to talk about anything specific, please raise your hand and we'll discuss it, right? Right. So, this is what Project Atomic is for me: it's an umbrella project which combines a lot of other projects that are running in the container ecosystem right now. Okay. Projects like Kubernetes, projects like the new Cockpit project. There are so many other projects which help in running containers, and we are trying to bring them all together so that they can work in peace and harmony, and we can get maximum benefit out of that, right? Right. Project Atomic is not yet another project to build yet another Linux distribution. That is not the goal. So, if you're thinking that we are probably trying to build the next Linux host: no, that's not true, we're not doing that. We are trying to build better tool chains which can coherently interact with each other so that you can get maximum benefit out of them. Right. So, what exactly is Atomic Host? Atomic Host is a very minimal operating system. It has a very small footprint and it boots up very fast. It doesn't have all the tools and utilities which are shipped by default if you download a Fedora image, and it will not have a pretty desktop UI; it doesn't ship with GNOME or KDE or any of your desktop environments.
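As a refresher, the incremental-build and registry workflow mentioned above looks roughly like this (the image name and registry host are placeholders, and the commands assume a running Docker daemon):

```shell
docker build -t myapp:v1 .                          # each Dockerfile instruction becomes a cached layer
docker tag myapp:v1 registry.example.com/myapp:v1   # tag for a hypothetical private registry
docker push registry.example.com/myapp:v1           # share the image with colleagues
```

Because layers are cached, rebuilding after a small change only re-runs the instructions below the change, which is what makes the incremental build cycle fast.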
That's not what Atomic Host is. It is optimized for containers. It uses the upstream RPMs of Fedora, CentOS and Red Hat, so you can trust those RPMs; you can trust that all that code has been verified by the appropriate QA teams. It supports atomic upgrades. Do you all know what an atomic upgrade is? Should I explain? Yeah, so what happens during an upgrade traditionally is that you download a bunch of RPMs and you install them. Say something happens in between, there's a glitch: maybe half of them get updated and the other half do not. That's not an atomic way of doing things. On Atomic, the entire transaction either goes through or it fails. Either your entire update is applied or nothing at all is applied, and that ensures your operating system is always in a very healthy, stable state. That's what we mean by an atomic upgrade. It has a very minimal package set, which reduces the attack surface from a security standpoint; very few packages are actually there. Now, since the entire update and installation process is atomic, you cannot do a yum install or dnf install of your favorite package like screen. That doesn't work; you cannot install your favorite packages. That's by design. We just want you to focus on running a set of containers and not the utilities around them. So, for example: Emacs? Not there. screen? Not there. Your favorite Apache or nginx or anything like that? Not there. Sorry. So, what are the components of Project Atomic? Docker, of course; it's the core, it's what we use to run the containers. We have rpm-ostree, we have systemd, we have Cockpit and Kubernetes. The atomic command is fairly recent; well, not very recent, but it's still in active development. Then we have SPCs. I think Dan is going to give a talk on SPCs; today at four o'clock. That would be very nice. Right.
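To make the upgrade story above concrete, here is a sketch of the atomic upgrade and rollback cycle using rpm-ostree on an Atomic host (the commands are real, but this only works on an rpm-ostree-based system, not a regular Fedora install):

```shell
rpm-ostree status      # show the booted tree and any other deployments
rpm-ostree upgrade     # stage the new tree; the running system is untouched
systemctl reboot       # the new tree is swapped in atomically at boot

# If the new tree misbehaves, flip back to the previous one in a single step
rpm-ostree rollback
systemctl reboot
```

There is no intermediate state: you are either on the old tree or the new one, which is exactly the "all or nothing" guarantee described above.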
So, let's talk about the components. What is rpm-ostree? rpm-ostree gives you a versioned file system tree which is bootable. I'm sure you have worked with some sort of version control system in your coding practice, like Git or SVN, something like that. This is kind of a Git for your operating system files. The entire file structure, which includes /boot, /etc, /usr/bin, whatever, everything is versioned. So if the entire file system goes bad, you can just point it to the previous version, and that gives you a very robust bootable system. There are no halfway-upgraded systems because, thanks to the atomic transactions, either the entire file system tree is updated or nothing at all, right? And as I have already mentioned, we do not package stuff yet again; we use the upstream RPMs from Fedora or CentOS or Red Hat and build the file system tree from them. Worth noting is that almost all the directories are read-only, so you cannot write a file inside, say, /usr/bin. Only /etc and /var are writable: /etc because that's where you'll keep your configuration, and /var because it holds a lot of data, like your container images; things like the users' home folders are symbolically linked into /var. So, for example, if you created a new configuration in /etc and you then update the system, your configuration is going to be preserved using a three-way merge of the files. All the new parameters will still be there, but your configuration will not get destroyed during an update. /var is not changed at all during an update, so you're safe there; anything in /var is just kept as is. Our next component is systemd. systemd is the init manager. It spawns all the programs during startup and manages all the services. It's highly modular and very powerful; you can check out Lennart's blog for the details.
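Since Atomic leans on systemd to supervise long-running containers, a minimal unit file for running a container as a managed service might look like this (the unit name, container name and image are made up for illustration):

```
[Unit]
Description=Example containerized web server
Requires=docker.service
After=docker.service

[Service]
# Remove any stale container from a previous run, then start the
# (hypothetical) image in the foreground so systemd can track it
ExecStartPre=-/usr/bin/docker rm -f example-web
ExecStart=/usr/bin/docker run --name example-web -p 8080:80 example/web:v1
ExecStop=/usr/bin/docker stop example-web
Restart=always

[Install]
WantedBy=multi-user.target
```

With `Restart=always`, systemd restarts the container if it dies, giving you basic single-host supervision before Kubernetes enters the picture.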
I think I'm not a very good authority to talk about systemd at length, but systemd is the default right now on almost all the distributions that I know of. Atomic also ships Cockpit. Now, Cockpit provides you with a pretty web-based UI to manage your servers. How many of you have used Webmin? One, two... yeah, quite a few. So, Cockpit is what Webmin should have been, with far fewer security holes. I'll just give you a quick sneak peek at how it looks. Can you see this? Is it visible at the back? Okay, great. So, you can see that it's showing me CPU utilization and memory. I can add more servers to it; there's the dropdown, and these are the machines it's managing right now. Let's see. I can also download more images, subject to internet, of course. Oh, I already have a bunch of images here. Yeah, so I can manage a bunch of hosts using a pretty web UI with Cockpit. Installing it on your Fedora workstation is just dnf install cockpit. Installing it on an Atomic host is slightly tricky: you need to run it as a container, and I believe it's actually an SPC because it needs certain access privileges which a default container doesn't have. Your best bet is to use the atomic command, which I'm just about to show you, and get your Cockpit running. Right? It's a client-server kind of architecture; you need the Cockpit agent on whatever machine you're trying to manage. So that was Cockpit. Another project I want you to know about is Kubernetes. It's a project from Google; they have written a lot of code to automate and orchestrate containers. Using it, you can spawn any number of containers very quickly. The idea is that you just provide a bunch of machines to Kubernetes, and Kubernetes will decide where your next container should go and what kind of resources it should have. And in case Kubernetes sees something going down, something crashing, it's going to bring it back up.
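To give an idea of how you hand work to Kubernetes, here is a sketch of a pod specification like the one used in the demo (the names, image and ports are placeholders, and the field names follow the current v1 API, which may differ slightly from the version shown in the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-web
spec:
  containers:
  - name: web
    image: example/web:v1        # placeholder image
    command: ["/run.sh"]         # hypothetical startup script inside the container
    ports:
    - containerPort: 80
      hostPort: 8080             # roughly the equivalent of docker run -p 8080:80
```

You submit this file to the API server, and the scheduler picks a node for it; you never say which machine the container lands on.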
So it's very useful for building fault-tolerant clusters of containers, which we are going to see in a quick demo. There are a lot of examples available in the Kubernetes GitHub repo, and there is very good documentation on how to set it up, so that's easy and quick to do. Let me just go through a simple demo. Kubernetes has a client and server architecture. Right now the API server, which is responsible for exposing the API, and the scheduler are running on my laptop, and the node part of it is running inside a virtual machine. So basically, I'm going to try to spawn a container, we are going to serve an HTML page from it, and then we are going to forcefully kill the container so that we can emulate a crash. And then we'll see how Kubernetes reacts to the situation. Sounds good? Let's just see if our node is ready. So I have got a node named FedNode and its status is ready. I'm not showing you how to configure it because it's very well documented; just follow the documentation. So I have already got this. Am I audible without the mic? Great. So this is the specification of the container that I'm going to boot up. Basically, I want to boot up this demo container image, tagged v1. The command which I want to run is the run script inside the container, and the container port and the host port are what you would ideally have done with -p in a docker command. Now, right, let's do it. Right, so my pod has been created. Let me just see what state it's in. Right, so it's not booted yet; the status is pending. The data is actually... okay, yep, the image is not loaded. It came up and then it went down again, and then it came up and went down again. All right, let's see what's happening. So this is the virtual machine which is running. All right, let me try. Okay. This is embarrassing. Okay, we'll try to come back to this in a minute because I think my machine is falling over. Right, I'll try to get back to the demo again.
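The demo steps above boil down to a handful of client commands; a sketch, assuming a configured cluster and the pod specification saved as pod.yaml:

```shell
kubectl get nodes            # check that the node (FedNode here) reports Ready
kubectl create -f pod.yaml   # submit the pod specification to the API server
kubectl get pods             # watch the status move from Pending to Running
```

The pending status seen in the demo is normal on first run: the node has to pull the image from the registry before the container can start.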
Let's move on to the next thing I have, which is the Nulecule project. What the Nulecule project does is provide a specification so that you can package your containerized application. By containerized application, I mean to say that it can have several containers which probably have to be started in a very specific way. So for example, you might have a containerized application which uses, say, a bunch of containers, say four containers: web servers, database servers. You need to provide a way to express how they are going to be linked, what kind of volumes will be mounted, so on and so forth. You need to specify that environment, and communicating that as documentation can be a problem sometimes. So Nulecule helps by combining all those things, and it ships the entire information as labels. What happens is that the atomic command is able to read those labels and is able to replicate the entire instruction set. So all the port bindings, all the disk volume mounts, everything can be specified in labels, and all those commands will be executed. Packaging complex applications becomes much easier in that case. Using the atomic command, you can also upgrade your Atomic host and roll back to a known good state in case things don't work out for you. You can also use the atomic command to download fresh images from Docker Hub or from wherever your registry points to. Other than that, there's a whole bunch of functionality coming up in the atomic command. Dan mentioned that atomic scan will materialize very soon, so you'd be able to scan images for security vulnerabilities and so on. And SPCs are super privileged containers. Super privileged containers have higher, elevated privileges than our normal containers. They provide debugging tools; you can write to syslog and do all those things using an SPC.
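To give an idea of the label mechanism, here is a sketch of a Dockerfile that ships its run instructions as labels; the atomic command reads the RUN label, substitutes the IMAGE and NAME tokens, and executes it (the image, paths and ports here are illustrative, not from the talk):

```
FROM fedora

# atomic run reads this label and executes it, so the port bindings and
# volume mounts travel with the image instead of living in documentation.
# IMAGE and NAME are placeholder tokens that atomic fills in at run time.
LABEL RUN="docker run -d -p 8080:80 -v /var/lib/example:/data --name NAME IMAGE"
LABEL INSTALL="docker run --rm --name NAME IMAGE /usr/bin/install.sh"

COPY run.sh /run.sh
CMD ["/run.sh"]
```

With this in place, `atomic run <image>` reproduces the full docker invocation without the user having to remember any of the flags.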
There's another workshop on SPCs; I think you should attend that. Oh, I think it's up, right. So right now what I'm trying to do is see whether my container is up or not, but the command is not returning any output; the CPU is just busy with a bunch of things. Yeah, my CPU is totally bombed right now. The container is up. The container is up! I think I can go on with the demo. Ideally what should happen here is that the page which I'm trying to serve from this image should stay up, but it keeps dying because my Docker is dying again and again due to the CPU. But now that we see that my container, the Kubernetes demo tagged v1, is up: if I try to stop it, then when it does stop, Kubernetes is going to take notice of it and is going to try to bring that container back up. So it's very useful for building extremely fault-tolerant environments. Yeah, let me try to pull up the page. Yeah, so my container is not running right now. Kubernetes will take a few seconds and try to bring it back up, right? Yep, so my container is back up. There it is. So Kubernetes manages that sort of interaction. It also helps you in building very big clusters spread across various hosts. Right, so that's about all I have. Do you have any questions on anything? Any of the projects, any of the tools? Yep. So you mentioned Nulecule; where can that run? Is that only a format, or can we use it anywhere else? So, Nulecule is a specification, not an implementation. It's just a set of rules for how we are going to communicate between different containerized environments, so anybody can implement that specification. The atomic command is one such implementation, and I'm sure in the future we are going to have many more implementations specific to other container engines. Any other questions? Great. Awesome, then I'll wrap it up. Right, thank you.