Do we have one? Apparently not. OK, we're good to go. We're going to start a little bit early. I have about 20 slides, so it's roughly two minutes per slide — I'll try to go really fast. I hope not to disappoint anybody; there's not a lot of visual content, but by the end you'll get the idea why. My name is Miguel Suniga. I'm part of the Cloud Platform Engineering team at Symantec, where we're building a private cloud. Today I'm going to talk to you about platform as a service. As you see, the platform is supposed to sit on top of the IaaS layer, and there are multiple options — as you know, in IT there are multiple ways of doing the same thing over and over. I decided to start playing with these two, Kubernetes and Mesos, and I put them on top of OpenStack, which got me here. So thank you, everybody, for coming. Let's go through the agenda really quickly. We're going to cover how to design your platform, then talk about the architecture, managing resources, how you manage containers, how HA gets set up, then jump into a really quick security part and how you can define your platform services — because there's a lot of confusion about how Docker runs things and what exactly it is — and also how to roll out new services, and then questions. So let's start. First of all: design your platform. As you know, the best way to design a new system is to go figure out what exactly it is that you have there — what are you going to be running? Stateful applications, stateless applications, how much load, how much networking, how much disk are you going to need? This is really important, because every platform as a service that comes along — whether it's Docker, CoreOS, whatever it is — is not a silver bullet.
Sometimes, to get a real platform as a service, you need to mix the technologies you have, depending on the use case you're running. The other thing you need to be aware of is how secure you need to be. Coming from Symantec, we focus a lot on security, but a lot of these technologies don't offer any type of security at all. So you need to figure out whether you abstract the security layer — maybe two layers above — or put it under another layer and say, I'm going to do network isolation. Or: this is my data center, I have a rack where I'm going to drop everything that is really confidential, and we're going to run it there. The last one, also really important, is where you're going to be running. Everybody saw the Google keynote and what happened there — they were jumping from one side to another. You have to be aware of this, because even though it seems like magic, this stuff breaks. So it's not really a good idea to just say, I'm going to deploy here and here, then move from here to this other cloud, then into my data center. None of the technologies we're going to talk about are tied specifically to OpenStack. You can run them on top of OpenStack, but you can just as well run them on any other cloud, or even on your physical servers. So those are the initial steps for figuring out how you're going to design the thing. The most important ones, like I mentioned, are identifying workloads and applications and how secure you need to be; everything else can pretty much be weighed after that. Once you have designed your application, let's look at the architecture. I'm going to show you three types of architecture. One of them uses Mesos. Another uses Kubernetes.
And the third one uses Mesos and Kubernetes together. As you can see in the diagram, the way it works is that you have your hypervisor, and on top you have your Mesos layer: a master, and then the Mesos slaves. Quick question — who has an idea of what Mesos is, or has worked with Mesos? Okay. Let me jump really quickly into what Mesos is. Mesos is a task scheduler. It was originally designed to speed up Hadoop: you run a task, it grabs whatever amount of resources you tell it from your cluster, and executes the task. When the task is done, it goes back and reports the output. What they have done is enable running tasks as containers. You can tell it: use this Docker container, pull it, and execute it. That means you can have any type of application inside your Docker container and run it as just another Mesos task. Mesos uses Docker by default, but you can actually plug in other external containerizers if you want. The way it works is that you have your master, and then you have frameworks. For Mesos, a framework is a way of interacting with the cluster itself. Two of the more popular frameworks out there are Aurora, which is used by Twitter, and Marathon, which is used by Mesosphere. What they do is basically take care of whatever task you give them, container or not, and run it constantly: if it dies, bring it back; it doesn't matter whether it lands on this node or that node, just bring it back. With that in mind, the way this works is that you say: okay, Mesos, you're running on all these VMs; I'm going to create a new container — let's say a LAMP stack.
So you tell it: execute this LAMP stack with, I don't know, 300 megabytes of RAM and two CPUs. It will query across the whole cluster and figure out who has enough resources to run that thing. It will spin up your container and have it running there until you kill it, or basically until it dies. If you're running Marathon, it will basically just wait until Docker removes the container and bring it up right away. Sometimes, when you're loading containers into Mesos, you have to put a delay inside the container because of how fast Mesos works — it will spin one up in less than a second if a container dies. For user interaction, it basically uses Docker networking — just like running Docker on your VM or in VirtualBox, it's pretty much the same deal: it makes the iptables rules and all that kind of stuff, and you access it through the host, in this case through the VM's IP. So that's Mesos on OpenStack. Marathon, by the way, is a JVM-based, really cool framework that you can go check out. Now comes Kubernetes. What does Kubernetes do? Kubernetes is more of a container orchestrator — I'm pretty sure you've already been through a lot of talks on Kubernetes here. And this is the setup that we have — well, pretty much any setup goes this way: you have the hypervisors, then your SDN, then a VM that holds the scheduler and the controller, and then you have the API. The user interacts with the API to say: Kubernetes, go create me this pod, or go create me this replication controller.
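That "create me this pod" request is, underneath, just a structured manifest. A rough sketch in Python — the field names follow the general Kubernetes manifest shape, but the image name is made up and this is not the exact API schema of the time:

```python
def make_pod_manifest(name, image, mem_mb, cpus):
    """Build a Kubernetes-style pod manifest as a plain dict.

    Illustrative only: real manifests carry more fields (apiVersion,
    ports, volumes, and so on).
    """
    return {
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # Resource demand, like the 300 MB / 2 CPU LAMP example above
                "resources": {"limits": {"memory": f"{mem_mb}Mi",
                                         "cpu": str(cpus)}},
            }],
        },
    }

pod = make_pod_manifest("lamp", "example/lamp:latest", 300, 2)
print(pod["spec"]["containers"][0]["image"])  # prints "example/lamp:latest"
```

The point is just that a pod is data: the user declares the desired state, and the system makes it happen.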
And pretty much Kubernetes goes to each of the other VMs — each running a client named kubelet — and says: okay, I need X, Y, or Z containers of this type, using this image; and it just keeps them running. One of the differences is that Mesos is more focused on executing tasks, while Kubernetes — even though it's based on Borg — is more about microservices. In a pod you can have multiple containers, and the whole idea is that each pod is decoupled from every other pod, except replicas, which are exactly the same. Say you have an application with your LAMP stack here and a Java app server over there. You create one pod with everything related to the LAMP stack, and another pod for your Java application, and the two of them talk to each other at the API layer. That way, if something goes down or you need to upgrade, you're upgrading specifically that pod and don't have to touch anything else in your application. One of the problems of running on OpenStack is that Kubernetes requires another layer of networking. CoreOS came up with Flannel, which lets you deploy that networking — and it exists purely so containers on different hosts can talk to each other. The way it works is that if container D needs to reach container C, the traffic goes through the kube-proxy, the kube-proxy forwards it through Flannel — which is just another SDN on top of whatever SDN you already have — and it's delivered to container C. That's how they talk to each other.
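That per-host proxy hop can be pictured as a toy round-robin forwarder — greatly simplified (the real kube-proxy of that era did this with iptables rules and userspace proxying, not a Python loop), just to show the load-balancing idea:

```python
from itertools import cycle

class ToyServiceProxy:
    """Toy per-host proxy: picks a backend pod IP round-robin,
    the way a service spreads traffic across its pods."""

    def __init__(self, endpoints):
        # endpoints: overlay-network IPs of the pods backing one service
        self._ring = cycle(endpoints)

    def route(self):
        # Each incoming connection is handed the next backend in the ring
        return next(self._ring)

proxy = ToyServiceProxy(["10.1.0.2", "10.2.0.3", "10.3.0.4"])
picks = [proxy.route() for _ in range(4)]
print(picks)  # round-robin wraps: the first endpoint repeats on the 4th pick
```

The pod IPs here are made-up overlay addresses; the real routing decision also involves the Flannel subnet assigned to each host.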
The kube-proxy, besides providing the routing for the internal network, also provides load balancing, and provides user access into the containers if you set them up as a service. I could probably talk — actually, I should have done three talks instead of one, one for each of these ways of doing things — but I'm running out of time. Anyway, let's keep moving. So we've seen the first one, which is just executing the container like anything else, except that you have a really nice cluster keeping all the resources for you and spinning things back up if they die. You have Kubernetes on OpenStack, which a lot of people are probably familiar with, and which works pretty well. And then you have this monster: Kubernetes and Mesos on OpenStack. What I'm showing here is basically the two technologies working together. This is a bridge that the Mesosphere folks created, and it actually works pretty well: you're literally telling it, you're going to run the Mesos cluster, and on top of Mesos, you're going to run Kubernetes. Why do something this complicated? Put it this way: Kubernetes, as some of you may know, has its own HA, but it crashes — it crashes a lot. Trust me, I've been playing with it for a long time. In this case, you're relying on Mesos to keep everything up and running, and relying on Kubernetes to manage your containers. In other words, you're abstracting the resource management and saying: Mesos, take care of where I'm going to run, how much CPU I'm going to use, how much memory I need, and just keep me up and running; and then you have Kubernetes on the other side saying: okay, here is my application, this is how it's set up, and this is how the different pods need to interconnect.
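That split of responsibilities — Mesos decides *where*, Kubernetes decides *what* — boils down to matching a task's resource demand against what each slave advertises. A toy first-fit matcher, purely to illustrate the offer-matching idea (slave names and numbers are invented):

```python
def first_fit(offers, mem_mb, cpus):
    """Toy version of what Mesos does with resource offers:
    find a slave whose free resources cover the task's demand."""
    for slave, free in offers.items():
        if free["mem"] >= mem_mb and free["cpus"] >= cpus:
            return slave
    return None  # no slave can fit the task right now

# Each slave reports how much memory (MB) and CPU it has free
offers = {"slave-1": {"mem": 256, "cpus": 1},
          "slave-2": {"mem": 1024, "cpus": 4}}

# The 300 MB / 2 CPU demand skips slave-1 and lands on slave-2
print(first_fit(offers, 300, 2))  # prints "slave-2"
```

The real scheduler is far more involved (offers expire, frameworks can decline them, placement is pluggable), but this is the core loop conceptually.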
So this is not literally stock Kubernetes — it's the Kubernetes-Mesos project. They grabbed the Kubernetes code and modified it so it can talk to Mesos as a framework. It still has its own client, called kubecfg, which you use the same way you'd use the regular Kubernetes client to create pods, services, or whatever it is. The only difference is that on the Mesos side, you first have Marathon. So you create, say, one container that runs your Kubernetes scheduler, another that runs the Kubernetes API server, and another that runs your Kubernetes controller. You go to Mesos and tell it: Marathon, execute these three containers for me, make sure they're up and running, here's how much memory and how much disk I need — there you go, take care of them. Once Mesos brings those up, you can literally just call: okay, Kubernetes, go deploy this replica. What happens is that Mesos spins up your three containers on the Mesos slaves, makes sure they're up and running, and then gets on with the rest of it. Once you talk to Kubernetes, Kubernetes talks to Mesos and says: the user is asking me to create a pod, and for this pod I need X amount of resources — Mesos, go figure out which node to put it on. And Mesos goes and puts the thing up.
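Those three Marathon apps can be sketched as plain Python dicts. The shape approximates the Marathon app-definition JSON (`id`, `cpus`, `mem`, `instances`, `container.docker.image`); the image names are made up, and a real definition would also carry health checks and port mappings:

```python
def marathon_app(app_id, image, mem_mb, cpus, instances=1):
    """Sketch of a Marathon app definition asking Marathon to
    keep a Docker container running (and restart it if it dies)."""
    return {
        "id": app_id,
        "cpus": cpus,
        "mem": mem_mb,
        "instances": instances,
        "container": {"type": "DOCKER", "docker": {"image": image}},
    }

# The Kubernetes control plane, run as three supervised Marathon apps
control_plane = [
    marathon_app("/kube-apiserver",  "example/kube-apiserver",  512, 1),
    marathon_app("/kube-scheduler",  "example/kube-scheduler",  256, 0.5),
    marathon_app("/kube-controller", "example/kube-controller", 256, 0.5),
]
print([app["id"] for app in control_plane])
```

Each dict would be POSTed to Marathon's REST API; from then on Marathon, via Mesos, owns keeping those control-plane containers alive.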
Mesos itself runs Kubernetes as tasks, and Kubernetes talks directly to the Kubernetes API, the controller, and the scheduler to figure out exactly what's needed — how it's going to be configured, whether there are services, whether it needs to talk to the networking, or pull down specific images — and then you have everything up and running. For actually accessing the stuff, you can either go through the kube-proxy — which Mesos is also controlling; at the same time you spin up your pod, you basically get a kube-proxy running on each of the VMs — or you can put HAProxy in front if you want to load balance across a lot of containers and reach your whole application from every single place. In this setup you also need Flannel, because even though everything is being executed by Mesos, the same Kubernetes architecture is still running underneath, so you still need Flannel for containers on different servers to talk to each other. It's been really fun dealing with this one, because it gives you an overall idea of how both systems actually work and how they're able to communicate. Now let's say you're going to kill something — a container, a replica, or a whole service. You go through the kubecfg client, which talks to the Kubernetes API; that sets things up on the controller; and the controller itself talks to Mesos and tells it: now kill this stuff — kill those containers you have running on this physical server. So you literally get the best of both worlds.
I mean, you have the resource management, because now if you want to scale, you scale the containers at the Kubernetes layer, and if you run out of servers, you just add the next server, put a Mesos slave on it, add it to the cluster, and you have more resources. I don't want to say this, but it's kind of exciting seeing both of them running. So: managing your resources. Let's get into this a little. What exactly is going to manage your resources? Like I said, Mesos takes care of these things. Each of the Mesos slaves reports: I have X amount of disk, memory, and CPU available for whatever task, and I have these other tasks running. From Mesos's point of view, there's nothing but tasks. It doesn't matter whether it's a Docker container, a script, or a batch job — for Mesos, it's just a task. You can also run all the Kubernetes components on VMs directly. Say you don't want to use Marathon: you can run them inside another VM and say, okay, here is my Kubernetes API, here is my Kubernetes scheduler, and here is the controller — sorry, the scheduler, the API, and the other piece; I'm kind of nervous right now. Anyway, if those die, you have to go spin them up again manually, which means figuring out what's going to supervise the fact that your Kubernetes setup is running all the time. Whereas if you deploy Aurora or Marathon, you put all of that inside containers and you don't have to worry anymore. If the Kubernetes API dies, or the whole node dies, Mesos picks it up and says: I need X amount of resources — spin up the Kubernetes API again.
If the Kubernetes Docker container dies, Mesos will just try to respawn it on the same Mesos slave, or go look for another slave that has enough resources for it. So it's really good to have everything under Mesos control, because you don't have to worry about keeping it all running. The pods and the containers — whether it's a single container, a whole pod, or multiple pods — are all run as Mesos tasks. The good part is that, say you want to graph stats: you can get stats at the Kubernetes layer — Kubernetes came up with cAdvisor for this — which gives you specific stats on the container itself and how it's running. But say you want stats on how things are running at the physical layer: you have Mesos. Mesos reports how much CPU, memory, and disk each task is using, when it started, when it stopped, and whether it had any problems at all. The other thing I want to get into is how Mesos manages the VMs themselves. Another plus, put it this way, is that you can run Mesos in a mixed environment. You can have your Mesos masters on physical servers and then say: I have this bunch of VMs in a cloud — use them as slaves. You can even define sections and tell the Mesos master: this is the cloud for X, Y, or Z loads, deploy those here; these are the physical servers for these other types of tasks and loads, deploy those there. So it doesn't matter where you're running — you can just start the Mesos daemon, as long as it can connect back to ZooKeeper, which is the only bad thing I see right now.
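Those "sections" map naturally onto slave attributes plus placement constraints. A toy filter, just to show the idea — the attribute names and slave names here are invented, and real Mesos expresses this through slave attributes and framework constraint operators:

```python
def eligible_slaves(slaves, required_attrs):
    """Toy constraint filter: keep only slaves whose attributes match,
    mimicking how you can pin workloads to 'cloud' vs 'physical' sections."""
    return [name for name, attrs in slaves.items()
            if all(attrs.get(k) == v for k, v in required_attrs.items())]

# A mixed cluster: cloud VMs and a physical rack server, tagged by tier
slaves = {
    "vm-12":   {"tier": "cloud"},
    "rack-3a": {"tier": "physical"},
    "vm-19":   {"tier": "cloud"},
}

# A workload pinned to physical hardware only sees the rack server
print(eligible_slaves(slaves, {"tier": "physical"}))  # prints ['rack-3a']
```

The scheduler would then run its normal resource matching over just this filtered set.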
You can tell it to run anywhere and just make sure you're managing each of the different resources the way you should. Now let's jump into a bit more of how we're managing containers. We've already seen the physical part, the VM part, the resource part; let's get into this. This is probably the part everybody knows. Kubernetes manages all the containers: you define your things into pods, and you can run a single container, a service, a replica. Who here is actually running Kubernetes already? Nobody? One, okay. Let me go a little into how Kubernetes runs. Each of these things — pod, replica, service — is pretty much a unit. A pod is a collection of containers that are really tightly coupled. You can have one or more containers running in a pod; you can see it as a group of related containers that can share data and have something more in common. The replica says: this is my HA, just in case that pod dies — Kubernetes makes sure you have another one exactly like it, prepared just for HA. The replicas also work as load balancing, besides high availability: you say, this is my pod, and the replica count has to be at least two, so it will create your original and then two more. If one dies, it will keep recreating pods to maintain the count. And all of this is internal. Now, for actually accessing the pods — as you know, getting into Docker containers from the network is kind of difficult, but Kubernetes takes care of that with services. Basically, a service is just an entry point.
You tell it: okay, this is going to be my service, and it will send all the traffic to these pods over here. The Kubernetes services — as you saw previously, let me just go back really quick; where are we? Here — are all handled inside what's called the kube-proxy. The kube-proxy is the one actually doing all of this. It not only provides load balancing and HA, it also provides the routing onto the Flannel network, which is down below. At the beginning it was also providing DNS; now they've split that out and manage DNS as a separate container, another piece in there. So instead of learning IPs, you can say: okay, I need to jump into my cool container, and now talk to my cool container two, and then my cool container three. Kubernetes doesn't force you to have anything static per container. Instead it has SkyDNS — the thing they're using to manage DNS on Kubernetes — which lets you define names, and it doesn't matter what the IP is: Kubernetes takes care of saying, I have this IP here, this is the Flannel IP of the Docker container, I'm going to forward the traffic over. Fast forward — where are we? Okay, there we go. I already talked a bit about how these things interact. The way you manage this stuff is through the kubecfg client that the Kubernetes-Mesos project ships, which is basically just a wrapper around the original Kubernetes client. So any flag that you pass to the Kubernetes client is also valid on this framework.
You don't have to change everything — literally the only thing that changes is the name. It's really straightforward. And I've already covered how the replicas, the services, and the pods work, so I think we're good on that. Let me see what else — I think I'm still missing something. No, let me move forward and I'll probably come back. High availability: how it's done. Here's a comparison between the three different setups and which one provides what. If you want high availability with Mesos, you need something like Aurora or Marathon that keeps your containers up and running — for Mesos, it's just a task: whether it finishes properly or crashes in the middle, for Mesos it's done, and it reports back, I'm done with this one. The only thing is that if you go with Mesos and Marathon alone, you need an HAProxy load balancer — or something similar — in front, to load balance across all the hosts. Kubernetes has its own HA components, which are out of scope here but which I've already explained: the kube-proxy, the replicas, and the controllers. And then there's Kubernetes-Mesos, which keeps HA for all the Kubernetes components — you manage those through Marathon — with load balancing either through HAProxy or through the kube-proxy itself. Now, jumping into slightly crazier stuff: security. How are we handling this? This is the difficult part, because none of these are multi-tenant, and none of them provide much — Mesos provides some types of ACLs, and that's pretty much it, and Kubernetes is on the latest v1beta3 API.
I think they're adding ACLs there too, but it's not easy to work with yet. So the way we're managing it is that, instead of having one huge cluster — since we're basically running on top of OpenStack — we say: for this user, create this cluster; for that user, create that cluster. You allow users to spin up different clusters depending on what they want, and you manage the security outside of it, abstracted. And it's been working well. I mean, it's not that everybody is completely secure, but we don't give users access to the Kubernetes client to spin things up directly — we force them through a portal, and the portal does everything in the backend for us. We were also running Mesos with SELinux — sorry, Docker with SELinux, which was a big deal before — and we're using CentOS 7, which actually works pretty well for this. The other security concern — and this helps with network isolation — is to enable iptables with a default DROP policy. Please, people, enable a default DROP. All this stuff runs fine without it, but then you can attack anything you have on Kubernetes, because iptables gets set up by Kubernetes, and even though they put in specific rules for each piece, you still end up with accept-everything. You could easily do man-in-the-middle attacks, denial of service, distributed denial of service, and bring everything down in one simple touch. Now — am I still on time? Okay. How are we going to design these platform services? This is the second part I was going to cover.
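The default-DROP advice amounts to a baseline like the one sketched below. This generates the rule strings only; a real deployment would also need rules for Flannel traffic, the Kubernetes API, and the actual service ports — the ports here are just examples:

```python
def default_drop_rules(allowed_tcp_ports):
    """Sketch of a default-DROP iptables baseline: drop inbound traffic
    by default, keep established flows, open only what you need."""
    rules = [
        # Flip the INPUT chain policy from ACCEPT to DROP
        "iptables -P INPUT DROP",
        # Don't break replies to connections the host itself opened
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for port in allowed_tcp_ports:
        rules.append(f"iptables -A INPUT -p tcp --dport {port} -j ACCEPT")
    return rules

# Example: allow only SSH and HTTPS in
for rule in default_drop_rules([22, 443]):
    print(rule)
```

Anything not explicitly opened is then dropped, which closes off the accept-everything exposure described above.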
I've already explained how the framework works; now, how are you going to use it? Think of this as cattle. Don't think, okay, it's my container, I need to keep my container. Containers are ephemeral: they die, and you cannot bring them back. Make sure you have a container plus a volume where you're actually putting the data. Second: containers are not VMs. They are not condensed VMs — they are processes. That's the reason this mix works so well: Mesos is a process scheduler, and containers are processes. Don't think, okay, I'm going to have my container here and run a whole bunch of things inside it. It's just a process: you run httpd, you run Mongo, you run MySQL. That's how this stuff is designed. I mean, containers are nothing new — they've been around since chroot. chroot was the initial container, and that's been running since, I think, before I was born. Next thing: VMs or containers. This is one of the things I learned from the Cloud Foundry guys — not everything is for a container. Some things you have to keep on VMs and physical servers. If you try to put everything in containers, you have to move everything to a stateless type of architecture, and sometimes that doesn't work — it will just not work. Sometimes you have a custom application doing really tight streaming of data, with TCP handshakes where if one is broken, the whole session restarts. So you have to make sure you have a pretty good idea of what you're doing, and this is why I mentioned the design step: figure out what you're running at the beginning, because once you get to this point, you'll know — okay, I need VMs for this one and containers for that.
Or you can rearchitect your application: instead of using VMs for everything, put a bunch of processes in a single pod and run it inside Kubernetes. If you have Marathon, or Aurora, or whatever you're a fan of: use it. Why? Because it will help you a lot. The cluster that's running right now — I don't even have to monitor it; I don't even have to open a ticket for it. It goes down, it brings itself back automatically, and that's the whole purpose of these frameworks. The other one is the security part of Docker. Who here is using Docker Hub? Anybody? Nobody? Like a Docker registry? Okay, a few of you. Use a private Docker registry. You can tell Kubernetes: instead of going to Docker Hub and pulling everything down from the internet, use my private registry. Don't just pull down whatever is out there. This is pretty much basic stuff, you know? Now, microservices — what is that? Anybody have an idea of microservices? No? A few. Anybody done SOA architecture in the past? Okay, more. Then you've already done microservices — it's the same deal. It is the same deal; it's just rebranded, newer, with all the learning and experience we got from SOA architecture, but at the end it's pretty much the same idea. Kubernetes is designed for microservices. Like I mentioned before, the pod is not just one container, it's a collection of containers, and the whole idea is that you have multiple pods talking to other pods via APIs. This lets you say: I need to do an upgrade on some application I have in a container — you bring it down and bring it up. You don't have interdependencies; it lets you upgrade easily because you're upgrading per component. And the last one, the one I mentioned: know your PaaS.
Know what you're putting in there, and don't expect it to solve the world — it's not going to solve the world at all. You can mix multiple things: if you're willing to say, okay, I'm going to use Kubernetes, I'm going to use CoreOS with Fleet, I'm going to use Heat — go ahead and do it. Maybe you're executing Heat to bring up the VMs running CoreOS, and then you deploy Mesos onto that. There are a lot of combinations you can go and do. And to be honest, I haven't seen a silver bullet for PaaS applications, because sometimes you'll say: okay, I have a really cool application, but it runs on Windows. What are you going to do then? Now, how do you roll out new services? Like I mentioned: use your private Docker registry. It's not really difficult to set up — you can literally pull down the Docker registry image and just docker run it, and you have your own private Docker registry. Create a UI abstraction for the users. I love simple things — who doesn't love simple things? Why make the users learn the CLI? Just click a button. It may look a bit Windows-y, but sometimes it's easier to just click a button and have everything done for you. Manage the clusters with Marathon — we already talked about that — and remember containers are processes. And one more thing: use CI/CD. CI/CD allows you to say: build my new Docker container, put it in place; okay, I have my new Docker container — go tell Kubernetes to deploy a new replica using this new image. And all of this stuff you should use in the backend.
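That hand-off can be sketched as: CI pushes `image:tag` to the private registry, then the backend builds the replica-update request for the Kubernetes API. The registry host, names, and exact manifest shape below are illustrative, not our actual setup:

```python
def rollout_request(replica_name, image, tag, replicas):
    """Sketch of the CI-to-cluster hand-off: build the replication-controller
    update the backend would submit once the pipeline has pushed the image."""
    # Hypothetical private registry host — don't pull from the public hub
    full_image = f"registry.internal:5000/{image}:{tag}"
    return {
        "kind": "ReplicationController",
        "metadata": {"name": replica_name},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [{"name": replica_name,
                                                  "image": full_image}]}},
        },
    }

req = rollout_request("dashboards", "dashing", "build-42", 3)
print(req["spec"]["template"]["spec"]["containers"][0]["image"])
```

The UI just fills in the tag from the latest successful build, so users never touch the CLI.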
Don't assume the users are always going to be knowledgeable enough to figure out all of this; that's what the UI abstraction is for. And two things you can go and use are the two projects we have there: Continuous is the CI pipeline that we open-sourced, and Strategos is pretty much the UI I'm talking about. If you want to give them a shot, feel free.

Here is pretty much an example of how we're rolling out services. The users go into Continuous; basically that's kind of like a reverse-engineered Travis CI that we have. They don't even have to worry about knowing CI itself. You just upload your code, your Dockerfile or whatever it is, add a configuration file, and tell it to execute `docker run`. By the end of it, the CI will have put the image inside your Docker registry, which is available to the cluster itself. Then you can either call the UI from CI and tell it to go ahead and deploy the new stuff, or, once the UI refreshes with the new image from the Docker registry, the user can define: okay, I have my downtime window, I'm going to tell it to deploy this stuff. And everything is done through APIs. They don't have to worry about anything. It's really simple to use.

We've used it to do crazy stuff. Well, not really crazy stuff. Some stuff. For instance, and this is just one of the projects we have there, we needed a dashboard service that would allow the users to create dashboards on the fly, and literally say: okay, I have my dashboard and I have all these metrics and I'm going to go and use it. The way we deploy it is that we grab Dashing. Dashing is a dashboard framework that Shopify created. The only problem is that it's a Rack application written in Ruby. So we're scaling it and doing all that kind of stuff.
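To make the "CI tells Kubernetes to deploy the new image" step concrete, here is a minimal sketch in Python. It is illustrative only, not Symantec's actual pipeline code: the `render_rc` helper name, the registry host, and the app/tag values are all hypothetical. The idea is simply that after pushing an image, the CI job renders a replication-controller manifest with the new tag and POSTs it to the Kubernetes API:

```python
import json

def render_rc(app, image_tag, replicas=3, registry="registry.internal:5000"):
    """Render a (2015-era, v1 API) ReplicationController manifest that a
    CI job could POST to the Kubernetes API after pushing a new image."""
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": "%s-%s" % (app, image_tag)},
        "spec": {
            "replicas": replicas,
            "selector": {"app": app, "version": image_tag},
            "template": {
                "metadata": {"labels": {"app": app, "version": image_tag}},
                "spec": {
                    "containers": [{
                        "name": app,
                        # Pull from the private registry, never Docker Hub
                        "image": "%s/%s:%s" % (registry, app, image_tag),
                    }],
                },
            },
        },
    }

manifest = render_rc("dashboard", "v42")
print(json.dumps(manifest["metadata"]))  # {"name": "dashboard-v42"}
```

The point is that a rollout is just data sent to an API, so the UI and the CI pipeline can trigger exactly the same deployment path.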
Normally you need to go into the server, modify the files directly, and then reload the application itself. The way we're doing it with the framework is that we just tell the users: create your new dashboard, give it a name. We show them the list of widgets they can pick, one after another, and they just select: okay, I want a meter, I want a graph, and I want something else. Then there's just a big text area where they copy and paste, whether it's Java, Python, C, whatever they want, to pull metrics from wherever they want. We go and drop it into a container, and it gets executed inside the framework. They don't have to know anything else. In probably less than two minutes, the dashboard is already pulling the data and showing new stuff.

So as you can see, you can go and create really different stuff, really amazing stuff, even things where you'd think there is not a single way to automate the process. Because now you can say: I have a new container, which basically is a process itself, and you can scale it as much as you want. You don't have to say, okay, I only have my server here running and I'm going to do something else. On the same framework, it's not only the dashboard service that's running; it's basically also running the CI application. Why? Because Jenkins has a plugin for Mesos, which literally tells the Jenkins master: talk to Mesos, figure out where you have resources, send this job there, execute it, and report back to me. So it gives you a lot of functionality with one single cluster, and you can do really amazing stuff with it.

And I still have four minutes left. Okay. Yeah, almost. So with that in mind, there are some links here you can go and look at. You can pretty much just grab the things out of the Symantec GitHub account. Strategos is going to be uploaded on June 30.
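For the per-user dashboard containers described above, a Marathon app definition is what actually gets submitted to the cluster. A hedged sketch, with the app id, image name, resource sizes, and instance count all as placeholder values rather than the real service's settings:

```json
{
  "id": "/dashboards/team-metrics",
  "instances": 2,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.internal:5000/dashing-dashboard:latest",
      "network": "BRIDGE",
      "portMappings": [{"containerPort": 3030, "hostPort": 0}]
    }
  }
}
```

POSTing this to Marathon's `/v2/apps` endpoint is all it takes; Marathon then keeps the requested number of instances running and restarts them if a slave dies, which is why nobody has to babysit the cluster.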
kubernetes.io, Mesos, and Marathon, which are basically the tools used here. You can go dig into them, or you can just wait until the first two are available, and then you can one-click deploy them anywhere you want. And with that said, I'm going to leave the rest for questions, which I think is four minutes or something. Any questions? Oh, do we have a microphone, just that one? Okay.

[Question inaudible] Okay, so the question was how well Mesos handles losing a VM or a physical server. It works really well. I've been testing it out, literally destroying VMs on top of OpenStack, and Mesos will detect that the job is no longer running, because the Mesos slave basically reports to the Mesos master, I think, every second, or every half a second; I think you can configure how fast you want it to report. You have to take into account that every time it reports back, it creates a log entry, so consider sending those logs to /dev/null or somewhere else so you don't fill up the server itself. But it works really fast. Like I mentioned, there are times where things go down and I don't even care, because it's all managed by the framework. The whole CI framework running on this stack is probably around 70 servers. I'm able to support losing up to 40% of them and the application won't go down. As long as it's not all replicas of the same component going down, it will just spin them back up right away. Any other questions? No? Okay. Well, appreciate it. Thank you very much for the time.
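On the slave-reporting interval from that answer: the detection knobs live on the Mesos master side. A sketch, with placeholder values, assuming roughly 0.2x-era flag names (check `mesos-master --help` for the flags in your version):

```shell
# A slave is declared lost after
# slave_ping_timeout * max_slave_ping_timeouts (here 5s * 3 = 15s),
# after which Mesos reoffers its resources and frameworks like
# Marathon relaunch the lost tasks elsewhere.
mesos-master --slave_ping_timeout=5secs \
             --max_slave_ping_timeouts=3 \
             --log_dir=/var/log/mesos \
             --quiet
```

Shorter timeouts mean faster failover but more ping traffic and log volume, which is exactly the trade-off mentioned in the answer above.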