So, let me plug in the VGA. Or there's an HDMI here, and the microphone; it's here, or you can grab this one, whichever you prefer. I think it's easier with this one. With slides, that's fine. So we have forty minutes and five people. Maybe someone won't show up, so we're not waiting for anyone. So it's VGA? OK.

OK, so we can start. I think we can start. OK, guys, so let's start our lightning talks. The format: each speaker has up to ten minutes, and then we switch without a break. We'll be here until seven, right? It's six now, I don't know exactly what time it is. So, let's give a warm welcome to Michael.

Thank you. My talk is seven Ansible tips and tricks in seven minutes. About me: my name is Michael, but I don't have time to present myself; you can see my other talk for that. I guess everybody knows what Ansible is. Please raise your hand if you do. Good, perfect. So, as I said, it's a lightning talk, so it's packed with action and all kinds of stuff.

The first trick is the action module, which you can use with a variable called ansible_pkg_mgr. It's a nice trick, but it's deprecated with Ansible 2.0. The way it works is that you can use it to install something on Debian, on Fedora, on RHEL, because Ansible dynamically detects which package manager the host uses from the setup module and its facts, and you create an action that depends on that variable. That means you do not need to specify yum or dnf explicitly. That's how I make a portable Ansible playbook.

Tip number two. When people use Ansible on the command line, you see that you need to give a group to the ad-hoc command, and there is the group "all". It turns out you can use that group for other stuff, for example for group variables: if you have a variable that needs to be set for every possible server, you can use that. This one is not a great tip because it's documented.

But you can also use that group to get the list of all hosts in your inventory, which is quite useful, for example, for monitoring. Let's say you want to declare a Nagios server and you want to ping all the servers. You can either keep a second copy of the list, which is quite bad because wherever there is duplication there is a risk of error, or you can use that trick, which is just a loop. When you add a new server, it gets added; when you remove a server, it gets removed. Quite easy.

Tip number four. It's also a tip with a variable, a nice one. You have your Nagios server and you want to get the IP address of a remote system. How do you do that? It turns out you can use two things. The first one is a dictionary called hostvars, where you give the name of the server, which can be the name coming from the loop in the previous tip. Then you can use a nifty fact called ansible_default_ipv4. You get the default IPv4 address that Ansible found, which is roughly the public IP address, and Ansible does that magically: if the host has two IPs, one of them will be the right one. That saves you from doing the complicated stuff I have to do with other tools.
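A minimal sketch of what tips one, three and four might look like together in a playbook. The package name, the Nagios group and the template are invented for illustration; they are not taken from the talk:

```yaml
---
# Tip 1: let the detected package manager fact pick yum/dnf/apt.
# This is the pre-2.0 idiom the speaker describes; from Ansible 2.0
# the generic "package" module covers the same need.
- hosts: all
  tasks:
    - name: install ntp with whatever package manager the host uses
      action: "{{ ansible_pkg_mgr }} name=ntp state=present"

# Tips 3 and 4: generate the Nagios configuration from the inventory itself.
# Facts for every host were gathered by the play above, so hostvars is populated.
- hosts: nagios
  tasks:
    - name: declare every inventory host in Nagios
      template:
        src: nagios-host.cfg.j2                      # hypothetical template
        dest: "/etc/nagios/conf.d/{{ item }}.cfg"
      with_items: "{{ groups['all'] }}"
```

Inside the hypothetical nagios-host.cfg.j2, the address would come from `hostvars[item]['ansible_default_ipv4']['address']`, the fact mentioned in tip four.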
The fifth tip is more complicated. I think a lot of people here are using CentOS or RHEL, and you may have seen that the CentOS release came out about two weeks after RHEL 7.2. In 7.2, systemd changed, and I wanted to use journald and its remote export feature, but it was only available on 7.2, Fedora, or newer distributions, not on the older ones. In an ideal world you would just say: for RHEL greater than or equal to 7.2, use that. But you cannot express that in the inventory; you can only list hosts, IP addresses, that kind of thing. It turns out that with group_by and a small Jinja expression, you can test the distribution version, create a group on the fly, and then apply journald-remote only to that specific group. You can filter on any kind of variable that way, build a group, and be done with it. And it works. It's no longer a problem now because all the hosts are updated, but in the future it will help again.

Tip number six, my favorite. Ansible by default works over SSH. That's fine: SSH is secure, everybody knows how to use it. But most people do not know that you can use other connection plugins. For example, you can use it with chroot. You can use it with, what do I have, you can use it with Windows. And you can use it with guestfs, so you can connect to a disk image, modify the image, and write it back. You can use it over Salt Stack: if you already have a Salt infrastructure, like I do for the Gluster project, and you want to get rid of it, you can start running Ansible over the Salt bus. And, because it would not be a good lightning talk without mentioning Docker, you can also use it to connect directly to a Docker container and modify the inside of the container, which is a complete violation of the immutability principle, but I do not care.

And the last tip is, well, the last tip is not so impressive, but I didn't have time for more. Basically, you are not forced to declare everything in the inventory up front; you can add your own hosts at runtime. For example, I can write a playbook that connects to a remote host I don't otherwise manage, but I can still add the host and make sure Ansible connects to it using delegate_to and remote_user. I don't know if people can read it from there, but you can use the add_host module.
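Rough sketches of what tips five and seven could look like. The group name, the version test, the role and the host details are illustrative guesses, not the speaker's actual playbook:

```yaml
---
# Tip 5: build groups on the fly from facts, then target one of them.
- hosts: all
  tasks:
    - name: sort hosts by whether they are new enough for journald-remote
      group_by:
        key: "journald_{{ 'ok' if ansible_distribution_version | version_compare('7.2', '>=') else 'too_old' }}"

- hosts: journald_ok
  roles:
    - journald-remote          # hypothetical role that configures journald remote export

# Tip 7: add a host that is not in the inventory at runtime, then talk to it.
- hosts: localhost
  gather_facts: no
  tasks:
    - name: register an ad-hoc host
      add_host:
        name: appliance.example.com      # made-up host
        groups: runtime_hosts

    - name: run something on the freshly added host
      command: uptime
      delegate_to: appliance.example.com
      remote_user: admin                 # made-up user
```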
And that's it. If you have questions, well, it's a lightning talk, so catch me outside. And if you want to contact me, it's the same everywhere: my email, my IRC nick; no Twitter, no Facebook, no LinkedIn. Thanks for listening.

Yeah, very lightning. Okay, next one is Jakub, or Alexander if he showed up. Okay, then Tomasz. Then we may have to repeat this like three times, both of you guys.

Okay, hello. My name is Tomasz Kukral, I work at the ICT department of the Czech Technical University, and my lightning talk is called Fun with Kubernetes. How did we get to Kubernetes? We had the prehistoric era, when we were using bare metal machines, deploying virtual machines manually, and all that horrible stuff. Then we discovered that we need more virtual machines and need to deploy them faster, so we started to use OpenNebula. And then we decided that we don't love our virtual machines; we just want to throw them away and use different ones. Now we are playing with containers and Kubernetes.

Our deployment is very small. We have just two Kubernetes nodes and one master, and a few running apps, I think two or three currently. But the deployment is backed by Ceph storage, and the Ceph cluster runs on seven servers, which means we need ten servers to run our small Kubernetes deployment.

This is the map of our network. You can see that we have the two physical nodes, called i1 and i2, and the master is a virtual machine running the controlling components of Kubernetes. Every component of Kubernetes runs in a container, except the kubelet; I will talk about that later. We use Ceph for persistent data, and the bird routing daemon runs OSPF and distributes routes to the other servers. But the picture gets a little more complicated when we start to think about high availability and fault tolerance: there are still i1 and i2, but there are so many links and so many switches. The image is inverted, so everything that looks yellow is really red, meaning we are missing it; it's not done this way yet.

The most important thing we need in Kubernetes is persistent storage, because, as one OpenShift developer called us, we are the crazy guys running MySQL databases in containers. From my point of view, NFS really sucks, because it's a single point of failure and a huge bottleneck: if that server goes down, your whole storage is down. So we are using Ceph, as I said before, with the RBD volume plugin in Kubernetes.

But it's not as easy as it may look. Developers run just one box: the nginx, the application and the database. When I try to deploy that in Kubernetes, I have to put kube-proxy in front of these three containers. And in production it has more problems, because the app container holds the static assets, so I have to move the static assets out of the app container and into the proxy container. There are CSS files and stuff like this. So I need a shared volume, which uses the emptyDir plugin, and I need a hook with an exec command that copies the shared assets from the app to the proxy, in the proxy as well as in the app. So what was one container with the app becomes, I think, five containers in Kubernetes after deploying.
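A rough sketch of the kind of pod described here: an app container and a proxy container sharing static assets through an emptyDir volume, copied over by a postStart hook. The image names, paths and hook command are invented for illustration, not the speaker's actual manifests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp                           # hypothetical name
spec:
  volumes:
    - name: static-assets
      emptyDir: {}                       # shared scratch volume, lives as long as the pod
  containers:
    - name: app
      image: example/app:latest          # made-up image
      volumeMounts:
        - name: static-assets
          mountPath: /shared
      lifecycle:
        postStart:
          exec:
            # copy the assets baked into the app image onto the shared volume
            command: ["sh", "-c", "cp -r /app/static/. /shared/"]
    - name: proxy
      image: nginx:1.9                   # made-up tag
      volumeMounts:
        - name: static-assets
          mountPath: /usr/share/nginx/html
          readOnly: true
    - name: db
      image: mysql:5.6                   # the database also lives in the pod in this setup
```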
Because we are running databases in containers, we need to back them up sometimes, and we are using Bacula, which makes it even more difficult: in every pod there must be a running Bacula file daemon, and this file daemon must be able to communicate with the Bacula master. And to make it even better, or rather even worse, it uses two connections, in both directions.

So I have one crazy dream: I would like to do backups in a streaming way, the same way we stream Docker logs or Kubernetes logs. I would like to take my mysqldump and put it somewhere in a backup stream, or something like this, and not have to care about the Bacula container, how to configure it and how to handle its networking. While preparing the presentation I got a really crazy idea: I can run my favorite mysqldump and send it to standard output, which means the dump will be saved in the Kubernetes logs, and that's a kind of backup I can recover later. But I didn't try it. Maybe if you have tried it, you can tell me whether it works or not.
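A minimal sketch of that untried idea: a sidecar container that periodically runs mysqldump to stdout, so the dump ends up in the pod's logs. Everything here (images, credentials, interval) is invented, and as the speaker says, this is not a tested backup strategy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-log-backup            # hypothetical
spec:
  containers:
    - name: mysql
      image: mysql:5.6
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme                # placeholder, not a real secret setup
    - name: dump-to-logs
      image: mysql:5.6                   # reuse the image just for the client tools
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme
      command:
        - sh
        - -c
        # dump once a day to stdout; the pod's logs then hold the "backup"
        - |
          while true; do
            mysqldump -h 127.0.0.1 -uroot -p"$MYSQL_ROOT_PASSWORD" --all-databases
            sleep 86400
          done
```

Recovering would then mean something like `kubectl logs mysql-with-log-backup -c dump-to-logs | mysql ...`, with all the obvious caveats about log rotation and dump size.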
We also sometimes need to monitor the servers, because if something is not running, you want to know. Our current monitoring is not very good: we monitor just the physical servers, whether they are in good condition, and we only monitor the overall Kubernetes status, because it's not easy to go inside Kubernetes and monitor each pod and container. We are using a rock-solid solution based on Nagios and Observium, but it's not easy to reconfigure something that rock-solid, because you cannot move rock very fast. So our current solution is not able to monitor the pods, which would need monitoring on a scale of seconds, because pods go down and up. So we are currently not monitoring the pods themselves.

Another favorite thing is etcd. If you have ever tried to run etcd, you know it's not easy to monitor. There are so many metrics; it's not just up and down, and Nagios wants three statuses: OK, warning and critical. etcd has so many states that I still haven't written the whole script to monitor all the metrics. That's all, thank you for listening. If you have any questions and we have time, I can try to answer them now, or just catch me outside and we can talk about Kubernetes later. Thank you.

Thank you, Tomasz. And next I think it's Neependra. Do you have slides? And Alexander, did you show up? Are you here?

Good evening, everyone. This morning I gave a deep-dive workshop comparing the different orchestration tools for containers. The last slide of the GitHub page I used for the workshop has links, so you can go back and try things out if you want to dig in more. Running containers on a single machine is not what containers are really meant for. You need to run containers in production, where you have multiple machines working as a cluster, with features like replication, rolling updates, and everything else you need in a real cluster or production environment. That's where containers are going to be deployed; that's the real use case.

So why do we need orchestration? Because, first of all, applications are getting complex, and you want deploying them to be very easy, rather than waiting for some complex operation to happen. We want zero downtime, we want auto-scaling, and eventually we want to get there from one command line. There are different tools already available that help with orchestration: Swarm, Kubernetes, Mesos, Diego, Apache Aurora, Amazon ECS, Azure Container Service. They can all help you run your containers in a cluster environment.

I just want to briefly go over what we need in order to have orchestration. First of all we need multiple nodes to be part of a cluster. Whenever you deploy a cluster you definitely need multiple nodes: some of them will be masters, some of them will be slaves, or minions as you may call them. The master can basically say: now deploy some containers on some nodes; we'll come to more detail on that. So you need some way, in a cluster environment, to identify which node is a master and which is a slave, and some mechanism to handle that. In Docker Swarm you have a token: with that token you create the cluster, and once nodes join, they become part of it. In the same environment you can have multiple Swarm clusters, identified by different token IDs. So in the same environment we have masters and slaves.

Then we need a container engine running on each of the nodes, which is going to host your applications. We have Docker and rkt, which will run your containers on top of the nodes.

Then you need a single source of truth about configuration and other details for the cluster. It's like saying: I want to run my application, and there should be five replicas of it, not four, not three, not fewer. So you need some kind of key-value store that everybody can refer to and ask: am I in the right configuration, the one I want to be in? The key-value store helps you with that, and with other uses too. You have etcd, Consul, these kinds of tools, available to manage that single source of truth.

Then, when you move to containers on multiple machines, you want a container on one machine to talk to a container on another node. For that you need some kind of overlay network, or some other way to reach from one container to another endpoint. How do you do that? You have the networking that comes with Docker, and there are different solutions that have been built to solve that problem.

Then you have a scheduler to schedule the containers onto the nodes. Basically you want a scheduler that decides how your containers will be placed on particular nodes. For example, you would like this particular part of the application to land on a node that has SSDs, so you need some mechanism to express that. The scheduler helps you there: if you put some kind of constraint on it, it should be able to honour it.

Then, once the application is deployed: containers are mortal. A container can come and go at any point in time, and it may come back on machine two. But your application, or the client accessing it, should not have to worry about that. So there should be a service discovery mechanism through which the cluster knows how to route to the correct container, even when it comes up on a different node.

Then you need some kind of proxy process. Let's say you are a client: you just want to connect to one endpoint, and in the back end there might be five containers running, or you might grow to hundreds of them. Whatever happens, the client should not be aware of it; it keeps talking to the one endpoint and that keeps working.

And once you have that, you also want some kind of shared storage. For example, if your application is writing data to disk on one machine, the same data should be available on the other machine, so there is no interruption in writing and the data you wrote is still there. So you want some kind of shared volume, like GlusterFS or Flocker; there are different plugins that keep the data available when a container moves from one node to another, because it's already there thanks to the shared storage.

These are, broadly, the features we need from the different orchestration mechanisms already available.
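To make the replica-count and single-endpoint ideas concrete, here is a minimal Kubernetes-flavored sketch (names, labels and images are invented): a ReplicationController that keeps five copies of an app running, and a Service that gives clients one stable endpoint in front of them, wherever the pods land:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-rc                  # hypothetical
spec:
  replicas: 5                      # the desired state the cluster keeps converging to
  selector:
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example/webapp:1.0    # made-up image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp                     # clients only ever talk to this name/IP
spec:
  selector:
    app: webapp                    # whichever pods carry this label, on whatever node
  ports:
    - port: 80
      targetPort: 8080
```

Swarm and Mesos express the same ideas with their own primitives, but the pattern of a desired replica count plus a stable endpoint is common to all of them.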
Now let's quickly look at the differences between Kubernetes, Swarm and Mesos. Swarm uses a token to identify the cluster. Currently it only has Docker as the engine available to run the containers. Swarm has different key-value store plugins, so you can say: I want etcd, Consul, ZooKeeper, BoltDB, whatever you want to use; you can just plug and play with that in Docker Swarm.

There is libnetwork, which gives you different drivers, but Swarm also supports other network plugins like Weave and Calico. For the scheduling part, you have a couple of filters and strategies you can use. You can have filters based on containers, like: run this container only where my DB is already running, something like that. And you can place containers with different strategies: spread your containers across different machines, or pick one node and fill it first before moving to the next, or place them at random. There is also something called Interlock, which you have to run as an extra component. By default Docker Swarm does not provide an internal DNS server or a service discovery mechanism, other than putting the details of the containers in the /etc/hosts file, but there is work in progress to add internal DNS or service discovery. Interlock can also work with HAProxy; you need HAProxy if you want to do load balancing with Swarm. And there are volume plugins like REX-Ray and Flocker which you can use with Swarm.

Is it a setting on this laptop? Yeah, sure. Can someone help me out? I'm really sorry about this. I don't know, I want to go back. We're running out of time, but did Alexander or Jakub show up, or can we give more time here? I just want to know. No, they didn't show up. We'll give you three more minutes to fight with GNOME. Not me.

So, for Kubernetes, quickly, we'll just finish off. Kubernetes has a master and slaves. It supports Docker right now, and rkt support is work in progress. It currently uses etcd as the key-value store. It can use Flannel, Open vSwitch, Weave or Calico as the networking plugin. In the scheduler you have predicates and priorities to decide where your containers will be placed, and you can have constraints, like the different strategies we have in OpenShift, for example an SSD strategy: put this particular container on a machine that has SSDs, or in a different zone, and so on. Then there is an inbuilt cluster DNS server, an add-on in Kubernetes, which also helps you with service discovery.

This is the architecture for Mesos; I'm just going to skip that for the time being. Okay, out of time. You can just go and look this up, it's on GitHub under nkhare. Okay. Thank you.

Thank you. Okay, thanks everybody for listening and for participating. Please do rate the lightning talks and the talks, and we hope to see you here tomorrow. That's it for today.