I won't be talking in Spanish, because I haven't spoken Spanish for ten years, but you're welcome to come talk to me in Spanish after the session. Let's try. First of all, most of you don't know me, so let's start with some kind of introduction. I'm a software craftsman, so I'm a developer. I work on RDO engineering at Red Hat, RDO being the Red Hat OpenStack distribution. I also do a lot of open source in my free time: I'm a Fedora developer, part of the engineering committee; I'm also a CentOS developer; and I'm the new project technical lead for the OpenStack RPM packaging project. I'm not paid to work on any of these projects. I've been interested in cloud computing since 2006, so obviously this topic has interested me for a while. As a back-end person, I wanted to try all these solutions and compare them. And if you want to grab me after the talk, please do; I'm a free software freak.

So let's go on. Have you ever heard about Docker? OK. Who has never? Great, I won't spend time on this. This is my co-worker Dan Walsh, the SELinux and Docker security guy, telling you how to properly pronounce "Docker". Well, I can't: he's from the US Midwest with that accent I can't reproduce, but anyway.

Let me say something about containers, because there are still people who confuse what they are. They're an operating-system method for isolating processes on Linux systems: they provide a sandbox to run applications. Another mistake people make is confusing containers with Docker. Containers have been around for two decades: there are Solaris Zones on Solaris, there are FreeBSD jails, and even on Linux we have plenty of container technologies: OpenVZ, LXC, Linux-VServer and others. The interesting thing is that Docker relies on native kernel infrastructure: control groups and namespaces.
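You can actually see these kernel primitives from any Linux shell. A quick illustration, nothing Docker-specific, just the plumbing every container engine builds on:

```shell
# Every process already lives in a set of namespaces and control groups;
# container engines just create new ones for the sandboxed process.
ls /proc/self/ns/        # the namespaces this very shell lives in (pid, net, mnt, ...)
cat /proc/self/cgroup    # the control-group membership used for resource accounting
```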
So you don't need out-of-tree patches to provide isolation features. They're not for virtualization: they don't emulate any hardware, and you share a single Linux kernel across all your containers. So that's it: think of it as a sandbox, chroot on steroids.

So if containers are not a new technology, what is Docker about? It's about industrialization. It's about "build once, run everywhere", because you package your application in a container image that you can ship anywhere and it will run the same, be it on your developer laptop, on your production server, on a VM, or on a cloud instance. It also brings you consistent continuous delivery, because you don't have to expect weird surprises in production: if it works on your machine, it works in production. You get reusability through layered images: if I want to provide a database image, I can take any image from the Docker Hub, for instance the Ubuntu one, and build my database image on top of it. And when the Ubuntu image gets updated, I won't have to redo my own image from scratch.

It also goes along with the trend of the industry moving from monolithic behemoths to microservices architectures: we're trying to break large enterprise applications into smaller services. What is interesting with Docker is that, as you ship the runtime with your application, you can also integrate your legacy apps gracefully and migrate them progressively to microservices without having to keep your old infrastructure on the side.

Another topic, which is a bit touchy for me because I'm paid to do RPM packaging, is the question: is packaging still relevant in a container world? My take is that yes, it is, and not just to save my job, because it also allows you to build your container images more easily by using RPM. I will speak later about another project that relies on RPM packages to do exactly that.
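To make the layering story concrete, here is a minimal sketch of a Dockerfile for the database example; the file names and paths are hypothetical, not from the talk:

```dockerfile
# Reuse the official Ubuntu image from the Docker Hub as the base layer;
# when that image is updated, we simply rebuild our layers on top of it.
FROM ubuntu:14.04

# Each instruction below adds a new, cacheable layer.
RUN apt-get update && apt-get install -y mysql-server

# Ship configuration together with the runtime.
COPY my.cnf /etc/mysql/my.cnf

EXPOSE 3306
CMD ["mysqld_safe"]
```

`docker build -t mydb .` produces the image, and the Ubuntu base layers are shared with every other image built on the same base.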
It's also about reusability, but this time geared more toward the end user. For instance, if I have an RPM package for my application and I build a container image based on that package, it will work the same on a Debian, Ubuntu or Slackware system, because I won't have to redo the packaging. It's also another graceful way to ship your application.

Now, Docker is a great technology to ship applications, but you need something to run these containers, so we have to rethink how we host them. You can use your usual GNU/Linux distro with Docker, it works, but if you run your applications, and maybe even your system services, in containers, why bother with a full-fledged Linux distro? So you could go toward a minimal distro like boot2docker, but even its maintainer will tell you that boot2docker is not made for production, and it may not provide all the administration tools you may need. Also, since we're moving to microservices architectures, there's an important point: you want cluster management tools, so you still need something that is not too minimal.

The advantage of microservices is that you don't need large hardware to run them; you may want to use commodity hardware. And as it's very fast to start a cloud instance or a container, you may even want something you can shut off instantly and bring back up in a few minutes. So why not use cloud instances? Still, it doesn't feel right, because we're using the same old tools: we are changing our architecture, but something is wrong. So let's try another way. Let's take our requirements for a container hosting system and see whether we can find something that fits, or build something that works for us.

I'd also like to take time on a few things we haven't spoken about. Container applications now have transactional updates with Docker.
You can update small bits of your application and do it transactionally, and if it doesn't work, you can roll back, which is cool, because you don't want to break your production system. And the application runs in an isolated sandbox. So why not my system services too? If they fail, that shouldn't bring down the whole system. And starting and shutting down containers is cheap, so maybe we want the same. So why not apply all of this to our hosting system?

So here is what we want. We want a minimalistic system, and a fancy one, because we are engineers, we like fancy stuff, or Docker wouldn't have taken off this fast in the community. We want applications shipped in containers, which means not shipping applications through packaging. System services should run in containers too. We'd like enhanced security through isolation mechanisms, because, remember, I said that containers are not for virtualization: you have less isolation than with virtualization like KVM, Xen or Hyper-V, so we may want to use a security module to provide isolation, like SELinux or AppArmor or one of the others; there are too many to cite the whole list. And we want transactional, atomic system updates, and also native cluster management.

Surprise: it's already there. In fact, the earliest system in this list came out barely six months after the first public release of Docker. We have CoreOS, Project Atomic, Snappy Ubuntu, Photon, RancherOS, and plenty of others. I focused on these five because the three on top are the major ones; Photon is interesting because it's brought to you by a leader in the virtualization market; and RancherOS is very specific, so I find it very interesting. They all comply with the previous requirements, and they all use common components, like Kubernetes, etcd and cloud-init. There are variations, of course, because if they were all the same, well, we wouldn't be here.
So it's important to see that they all represent a major change in the way we handle infrastructure. Let's review a few components they share.

fleet is a distributed init system, or a cluster management tool. If you hate systemd, sorry, guys: fleet leverages it. It uses it as the init system, it uses its logging features through journald, and it uses socket activation, which is awesome. If you hate systemd, I don't understand, because socket activation alone is the killer feature you want for infrastructure. Does anyone know what socket activation is? OK, some people don't. Socket activation means starting an application when something pings its socket. For instance, I send a request on port 80, and systemd says: someone sent a request to that port, please start the HTTP daemon. It holds on to the request until the HTTP server is up, then hands it over, and things are served as usual. It allows lazy starting of services, which is awesome for infrastructure: if a service is not used, why bother starting it in the first place?

Note that systemd is not used to start processes within containers; actually, systemd within containers doesn't work. There's even a fake systemd to allow processes that rely on some systemd features to run inside containers. So fleet starts containers, each of which runs one process, which is in line with the Docker model: one process per container. You can start multiple processes, but one per container is the model. fleet also gives you fine-grained scheduling and machine discovery, but machine discovery relies on the next component, which is etcd.

etcd is a weird beast. It's mostly a data store containing ephemeral data: unit files, cluster presence, unit status. It provides service discovery, and it basically provides synchronization primitives. I like to think of it as a lightweight ZooKeeper alternative, but still very specific to the container world.
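As a sketch of what socket activation looks like in practice (the unit names and paths here are hypothetical), you declare the socket and the service as two small systemd units:

```ini
# --- httpd-demo.socket ---  systemd itself listens on port 80
[Unit]
Description=Demo HTTP socket

[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# --- httpd-demo.service ---  started only when the first request arrives
[Unit]
Description=Demo HTTP daemon

[Service]
ExecStart=/usr/sbin/httpd-demo
```

systemd queues the pending connection and hands it to the daemon once it is up, so clients never see a refused connection.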
We also have cloud-init, because if you start on-demand systems, you can't configure them by hand; you can't just pass them their configuration. So you use a daemon called cloud-init that runs at boot time and retrieves data from an endpoint to initialize your cloud instance, or your hardware instance; it works on both. It's a tool built by Canonical, it's more or less a standard, and it's very useful for stateless systems.

Kubernetes is basically container orchestration by Google. It's not inside the hosting system; it's an orchestration tool. Basically it will be used to provision your cloud instances or your hardware, run your hosting system, and then start your applications on top. It handles physical hosts, Google Compute Engine, Azure, Mesos, CoreOS machines, Atomic hosts. It provides scaling, self-healing and replication mechanisms. I like to think of it as the lingua franca of the container world: one tool to rule them all.

So let's review our container hosts, because we now have less than ten minutes. CoreOS is a derivative of Chromium OS, which is itself a derivative of Gentoo. It's the oldest one on the list and relatively mature. They created etcd and fleet, which we spoke about before. They also created a competing container engine called Rocket, which is more or less a clone of Docker, but with standards. Well, that recently changed: at DockerCon, CoreOS and Docker Inc. announced that they will be working toward a common specification for containers. CoreOS does not support software installation on the host, so you have to run a privileged container called Toolbox, which is Fedora-based, and which you can use to install anything you want for debugging purposes. It has no additional security isolation, which to me is the only flaw in the CoreOS user story. And there are still open questions about whether they will keep using Rocket or the standardized container runtime. Still, if you want to test one of these, CoreOS may be a safe bet.
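Back to Kubernetes for a second: to make "orchestration" concrete, here is a minimal pod definition against the v1 API of that era; the image name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: example/myapp:1.0   # hypothetical image on a registry
      ports:
        - containerPort: 8000
```

`kubectl create -f pod.yaml` schedules it on some node of the cluster, whether that node runs CoreOS or Atomic.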
The next one is Project Atomic, which is sponsored by my employer, though I don't work on that project. Basically we take our whole family of distributions, Fedora, RHEL and CentOS, and provide for each of them a variant called an Atomic Host. The main difference is that you don't use yum; you use something called rpm-ostree. OSTree is like Git for binaries, if I want to sum it up; it originates from the GNOME continuous integration platform. It's used to maintain your system as layers of images, much like container images, which is nice because it fits the Docker story nicely. And rpm-ostree is an additional layer that lets you build those image layers from packages. So either you take the Atomic Host image from the Fedora or CentOS website, deploy it on your machine, run it, and get updates from the remote repository; or you build your own image, with your own package set, using rpm-ostree. It's kind of a middle ground between traditional systems and modern container host systems.

On the application-management side it's pretty much the same as CoreOS. It matured quite recently, but it's pretty much at the same level of support as CoreOS now; it should be, at least. And we have an additional security layer with SELinux, which provides more isolation. Remember my co-worker Dan Walsh? He's the guy who implemented it. Quite interesting. (Five minutes.)

Let's review the next one: Snappy Ubuntu, which may seem the most recent one, but in fact it's based on older work from Canonical: Ubuntu Core, their work on phones, and their "just enough operating system" images, which already used LXC containers to isolate apps. They use AppArmor to enforce isolation.
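To give an idea of how you build such an image from packages, rpm-ostree takes a JSON "treefile" describing the ref and the package set. This is a hedged sketch; the exact keys vary between versions:

```json
{
  "ref": "fedora-atomic/f22/x86_64/docker-host",
  "repos": ["fedora-22"],
  "packages": ["kernel", "systemd", "docker", "kubernetes", "etcd"]
}
```

You feed this to `rpm-ostree compose tree`; on the client side, `rpm-ostree upgrade` pulls the new image atomically and `rpm-ostree rollback` reverts to the previous deployment if it misbehaves.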
What is interesting in their model is that they have an additional abstraction layer: frameworks and applications. Basically, frameworks are container engines, and applications are the containers themselves. That's interesting because you're not limited to one container technology: you can imagine someone bringing a framework to run Android applications on a Linux system, for instance, and have it run on your own system. Another variation is that it uses LXD, which is another, well, they call it a "container hypervisor"; I would call it a Docker clone, but that's a matter of definition. It can also use Docker because, LXD being very recent, they don't know where the market is heading, so they keep both technologies.

Photon is still a tech preview from VMware. It's based on Fedora, and it plans to adopt rpm-ostree, so I think in the end it will be pretty similar to Atomic. I got interested in it because they have a yum-compatible package manager called tdnf. It's still very new, not mature; I would still follow it, but it's not something you can use in production. It's also still tied to VMware products: you can run it on VirtualBox now, but I didn't manage to run it on KVM. It's still very recent.

RancherOS is my favorite one, really, not because it's the one I would use in production, but because it has a much more radical model, which is very interesting. It has an extremely minimal footprint. It doesn't use systemd as PID 1: it uses Docker. And it runs another Docker daemon, for applications, inside a Docker container, which is Docker inception. It's amazing because it brings some kind of fault tolerance; it kind of leverages Docker for fault tolerance. (Sorry, the spotlight is very strong and it's disturbing me.) It's very different from the previous ones, and that's why I find it so interesting.
They are geared toward embedded devices and the Internet of Things, which is now the next big thing in the tech industry, so it's very interesting, and it works quite well. But I wouldn't use it in an enterprise IT infrastructure, because it still lacks some features, like clustering. (One minute.)

In the end, I won't be too late. I've given you so many choices that you must hate me, and I didn't give you the right solution, because I don't have one. This is an emerging model, and things change fast: Atomic Host one year ago was completely different from now, and the same goes for CoreOS. Now I think they have reached maturity, so we can start using them in production; we still don't know which will be the right model. CoreOS, Atomic and Snappy are maturing. In terms of resources they are pretty much the same: disk usage, memory usage, the same. Security-wise, I think Atomic and Snappy have a much more consistent user story, because they use mandatory access control security modules to provide enhanced isolation. So sorry, CoreOS, but you have to fix that if you want to stay relevant in that market. (I'm running a bit long, but we're near the end.)

RancherOS is interesting because it targets a niche market, so I think it will fare pretty well, since none of the three previous ones targets the Internet of Things. Maybe Snappy will, but, well, knowing Canonical, they would utterly fail if they did. I like trolling. Snappy and Photon target different container technologies, though they still support Docker. I also want to stress the importance of cluster management: Kubernetes can handle CoreOS and Project Atomic in the same infrastructure in the same way, so there's much less coupling to the base system; the base system doesn't matter that much.

So that wraps it up, and it's time for Q&A. Thanks to my co-worker Michael, to my friends Marianne and Mathieu for proofreading, and to Dr.
Sheldon Cooper for his inspiration; credits go to The Big Bang Theory show for the GIFs. And thank you, guys, for staying here; I may not be the best speaker around, but I appreciate it.

Thank you, Haïkel. So we do have time for a few questions, maybe about two of them. Anyone have a question?

Hi. I take it that rpm-ostree uses a different binary format for the images. How many container image formats are there out there? I noticed the containers you build for Docker can run on most of these systems, but I take it not all of them. Basically, if I'm using Docker and docker build to build containers, I can run those images on most of these, but I understand not all of them. Is that correct? How many binary formats are there now?

Oh, you mean image formats? Yeah, the interesting thing with Docker is that they standardized the image format. If they hadn't done that, we wouldn't have "build once, run everywhere". That's why I say Docker is about industrialization. I did continuous integration way before Docker happened, and the pain was making these images: when you made an image for OpenVZ, you weren't even sure it would work on a different OpenVZ host. So when Docker solved that, they changed the game; that's why they created so much noise around this technology.

I think there is a question here.

As you work on RDO, I want to understand: do you see a solution for supporting external storage systems in these container environments, just like in the OpenStack world? Could it be standardized in some way? So if I have an OpenStack storage system, could I use it via a container?

OK, I see two questions. The first one being: can we use an external storage system for containers? The answer is yes, we can. Docker has storage drivers that allow that, so you can do this from Docker directly.
Since you mentioned OpenStack, I see your second question as: how do we mix containers and cloud infrastructures? We have various initiatives to bring containers to the OpenStack world. Nova, which is the compute service in OpenStack, has now rebooted the nova-docker initiative. It allows you to run containers directly from OpenStack, using the same infrastructure as usual, so you would be able to upload images to the image service and run them too. Does that answer your question? If you want to talk more after the session, I have time.

Thanks again, Haïkel. You're awesome, guys.