So, hello everyone, thanks for coming to this session. I will be presenting on "Deploying Containerized OpenStack: Challenges and Tools Comparison." One thing I want to mention: Zafri from 99cloud.net and Surya from NEC Technologies India were not able to join this presentation, so I will be presenting on their behalf. My name is Jaivish Kothari and I am a senior software engineer at NEC. My areas of work include storage and cloud; I currently work on project Kuryr, and I work with container technologies like Docker and Kubernetes as a member. You can catch me on IRC as janonymous, and my GitHub link is pasted here along with my LinkedIn ID. So, moving ahead.

The idea of this presentation: first we will cover a few deployment tools. Then we will move on to the challenges with the traditional deployment tools and why we are moving towards containerized ones. Then we will look at containerized deployment tools, what options are available in the market, and how they differ. In this presentation we compare only three tools, because comparing all of them was not possible in one session, and then we will summarize.

So, deployment tools. We all know that deploying OpenStack is a tedious task, and deploying it manually is not practical, so we have automation tools which handle most of that work. Deploying OpenStack is a fairly complex task, as was mentioned in the keynote session earlier this morning. So here is the operations guy, and these are some of the tools available in the market. These are not all deployment tools; tools in the market vary, starting with provisioning tools, which provision the cluster before deployment.
Then we have configuration management tools, then deployment tools, and then lifecycle management tools; deployment tools can fall under any of these categories. But we will talk specifically about deployment tools. Here we have listed Blazar as a configuration tool, Kubernetes as a lifecycle management tool, Kolla, DevStack; this is the ecosystem of typical tools available in the market, and this is the operations guy, really confused.

So we will be specifically talking about OpenStack deployments. By OpenStack deployment we mean that OpenStack is not a single application; it consists of a number of projects. The most commonly deployed projects are eight or nine, which include Nova, Neutron, Cinder, Keystone, Glance, Swift, and a few others. So by OpenStack deployment we specifically mean deploying all of these, and we all know that OpenStack deployments are fairly complex, because managing the configurations and services individually is a tedious task. Apart from that, managing these deployments over time is even more complex, because we have to handle versions, upgrades, and interdependencies.

Deployment tools vary by vendor: Canonical provides Juju, Mirantis provides Fuel, Dell provides Crowbar, and Red Hat provides Packstack and TripleO (Director); there are a few others. These tools may be doing the same thing, but they vary by vendor, and they deploy on different ecosystems or distributions. Deployment options usually consist of deploying on a bare-metal node or a VM, and now there is a third option we have heard about throughout this summit: containers. So this is the OpenStack user survey of deployment and configuration tools used in the market.
There are two points to note here. First, we have Ansible and Puppet at the top; earlier this survey showed Puppet at the top, but Ansible has shown significant growth this cycle, followed by Fuel, Juju, Packstack, and a few others. Another point to note is that there are many containerized tools at the bottom of the chart, like OpenStack-Ansible with LXC, Kolla-Ansible, Kolla-Kubernetes, Red Hat's TripleO Director, and a few more. What we can infer from the survey is that these tools have gradually come into the market and are rising pretty quickly, and in further releases we will be seeing them in the top of the list as well.

So what are the challenges in traditional deployments? By traditional I mean non-containerized ones, to be specific. What are the problems with existing deployments, and why are we moving towards containerized ones? We heard in the keynote that there are a lot of challenges with OpenStack adoption, so I will not list them all; I will list a few. The first commonly cited one is the difficulty related to deployments. We all know that OpenStack is a bundle of services and configurations, and managing each one of them is a very difficult job for an operator. The second is ongoing lifecycle management. By that I mean upgrading dependencies, which in the case of OpenStack deployments leads to many outages. Once these problems are solved, we can expect further adoption of OpenStack.

Apart from that, the areas which require further enhancement: first is installation. As the installation proceeds, we see that there are many conflicts between the various services in the OpenStack ecosystem, because there are a lot of dependencies, and many of these dependencies can conflict with each other when services share them.
I will show this with the help of a diagram in the next slides. The second point: deployment is a huge task and is prone to failures, because even if the cluster is deployed successfully, no one can assure that it will be working. Long deployment time is also a big challenge for all operators, because OpenStack usually takes a lot of time for setting up all the configurations, and being deployed on a production node is a huge task.

Under installation, I talked about conflicts between service configurations. Now I will show how this happens with the help of a diagram. Suppose we have two services, say an identity service and a network service, and they share a common dependency, say package C, which has a configuration file abc.conf, and they share a port number, say 123; these live on the host operating system. When these two services share the package and the network service wants a different configuration of that package, then if the network service changes that configuration, it leads to both services failing: the network service changes abc.conf, and when the identity service is restarted, it also fails, which leads to neither service working.

Another big challenge is upgrades in OpenStack. We have heard a lot about upgrades in OpenStack and their failures, because upgrading is not an easy task. The lifecycle could use a lot of attention: doing in-place upgrades is a risky business for many big market players, because they are already running older versions like Mitaka and Liberty, and jumping from one version to another is very difficult, and here we are talking about jumping to the current version. In that case it becomes even more difficult, which generally results in an outage. By outage I mean the many unexpected failures which can happen during the upgrades.
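The shared-dependency failure above can be sketched in a few lines of Python. This is a toy simulation with invented names (abc.conf, an "identity" and a "network" service, a port option), not any real OpenStack code: both services read one config file on the host, so when one service rewrites it, a restart of the other now fails.

```python
# Toy simulation of two host services sharing one dependency's config file.
# All names (abc.conf, port option, "identity"/"network") are illustrative.
import configparser
import os
import tempfile

def write_config(path, port):
    # A service (re)writing the shared dependency's config file.
    cfg = configparser.ConfigParser()
    cfg["DEFAULT"] = {"port": str(port)}
    with open(path, "w") as f:
        cfg.write(f)

def restart_ok(path, expected_port):
    # A service restart: it only comes up if the shared config still
    # holds the value this service was deployed with.
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg["DEFAULT"].get("port") == str(expected_port)

shared = os.path.join(tempfile.mkdtemp(), "abc.conf")
write_config(shared, 123)        # both services expect port 123
print(restart_ok(shared, 123))   # identity service restart -> True
write_config(shared, 456)        # network service rewrites the shared config
print(restart_ok(shared, 123))   # identity service restart now fails -> False
```

Packing each service with its own copy of package C and abc.conf, as containers do, is exactly what removes this coupling.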
Getting tenants to move to a new deployment is a big task, and even after everything succeeds we cannot guarantee that we can migrate to a new version without any problems; if a problem occurs, we might end up running two versions of the same service, and in that case that's a big loss for all the operators. Another big challenge: during upgrades there are a lot of dependencies and a lot of bugs faced, and solving these is time consuming for everyone. Apart from that, a rolling upgrade with zero downtime is also a challenge for OpenStack upgrades.

So what's the solution? We generally see that many organizations are moving towards containers. We saw a keynote session in which Verizon told how they containerized the OpenStack services. Now we'll see how containerization helps. This is what a typical containerized application looks like: there is an app running inside a container. The container might use any container runtime, whether Docker, rkt, or anything else, and it runs on a host OS, with the hardware setup underneath. The app has its own dependencies, in this case libraries, packed inside the container.

We will be specifically talking about control plane containerization, because containerization of compute nodes is another topic with its own set of solutions. So, multi-version operability becomes possible in the containerized case: two versions of the same service can easily run on the same system, because each version has its own dependencies packed inside its own container. Second, running OpenStack in containers gives you the benefit of consistent deployments, which means the test environment, the development environment, and production are all the same across all the nodes.
Consistency is also a big point; our containerized apps must be consistent across all the nodes. Apart from that, the robust application lifecycle management of containers also helps a lot in managing the OpenStack services. So besides physical machines and virtual machines, on which we have all heard OpenStack deployment is possible, there is a new option: containers. These might run inside VMs or directly on bare metal.

This diagram shows how we can visualize a containerized environment. Say there are two services, like Nova and Cinder, using their libraries on the host machine, with the hardware underneath. A traditional deployment puts Nova and Cinder directly on the machine; moving towards containerization means containerizing Nova and Cinder, because in OpenStack we get the benefit of microservices. By that I mean that each OpenStack service is a modular component: Nova is designed to be modular, as is Cinder, as well as all the other components. Containers take advantage of this modular architecture of OpenStack, so the services can easily be packed inside containers. This gives the benefit of each service having its own libraries, and the interdependencies we saw earlier are no longer there.

So how does containerization help here? Containers are easy to deploy, and they are portable across systems, as we have seen. A container isolates the application on a host operating system; by that I mean all the dependencies and services have their own environment and run standalone, regardless of what the other services are running. They interact through APIs, but they do not share dependencies.
Containers also give faster boot times, unlike VMs, which take up resources and take time to boot, and there are easy upgrades from one containerized service version to another. Another point is environmental consistency: the environment is the same on development, production, and testing machines, so we get a consistent environment throughout.

We have discussed that OpenStack components sharing common libraries is a problem; containerization gives the benefit of upgrading each of the services, or a version of one service, regardless of the others. The problem we faced earlier, that dependencies lived on the same host system and were used by both services, can be solved in two ways: either you run each service in a virtual machine, or you use lightweight containers; each has its own pros and cons. In this case containers win the battle. Process and network isolation are provided by containers, and containers are lightweight and easy to start up. They have smaller images regardless of the operating system they run on, because they share the same Linux kernel. Virtual machines load their own kernel, so they are heavyweight: they take up their own chunks of resources on the hardware, and while they do have their own process and network isolation, they remain heavyweight. So operators usually go with containers in these cases.

Apart from that, the benefits of containerizing OpenStack services: they are easily upgradable. It is very easy to upgrade from one container version to another.
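As a rough sketch of why container upgrades and rollbacks are easy, here is a toy Python model, assuming invented image tags like nova:mitaka. The point it illustrates: a containerized service just points at an immutable image tag, so rolling back means moving the pointer, not reinstalling packages on the host.

```python
# Toy model of tag-based upgrade/rollback; image names are illustrative.
class Service:
    def __init__(self, image):
        self.history = [image]     # immutable image tags, oldest first

    @property
    def current(self):
        return self.history[-1]    # the tag currently running

    def upgrade(self, image):
        self.history.append(image)

    def rollback(self):
        # Drop the newest tag and fall back to the previous image.
        if len(self.history) > 1:
            self.history.pop()
        return self.current

nova = Service("nova:mitaka")
nova.upgrade("nova:ocata")
print(nova.current)    # -> nova:ocata
nova.rollback()        # the new tag misbehaved; point back
print(nova.current)    # -> nova:mitaka
```

Because the old image still exists unchanged in the registry, the rollback restores exactly the state that was running before, which is what makes zero-downtime rollbacks feasible.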
The tooling just checks the previous state and moves to the next desired state, as provided by many COEs like Kubernetes. Since the entire software stack is held in containers, the interdependencies we talked about do not pose any problems now, and there are easy rollbacks with zero downtime. A mix and match of service versions can run on the same host, and the container images are immutable; by that I mean container images are the same regardless of the system on which they are running.

So, moving towards containerized deployments: what does a typical containerized deployment tool do? We all know there are many tools available in the market to containerize OpenStack services, but a typical containerized tool uses an image repository, which could be Kolla's or its own, and a configuration management tool like Ansible or Puppet (or its own) to generate configs and deploy these container images onto the host systems. To visualize this: we have container registries which hold container images, which either can be built at deployment time or can have been stored earlier. For example, TripleO uses Kolla's images: Kolla has already-built images that TripleO can use directly, or you can configure it to build from source at deployment time. So we have container repositories on one side, and a deployment tool or framework which uses these repositories to launch an instance and run a container. On the node running these containerized services we use a container runtime, which could be Docker, rkt, or any other, depending on the deployment tool. Among container runtimes, Docker is the most used, followed by LXC and LXD.

Now we will move towards comparing container-based deployment tools and listing the containerized deployment tools currently available in the market.
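The config-generation half of such a tool can be sketched very simply. Real tools like Kolla-Ansible render Jinja2 templates from an inventory; here plain string.Template from the standard library stands in for that, and the template text and inventory values are made up for illustration.

```python
# Sketch of config generation: one template plus per-deployment inventory
# values yields the per-node config that gets baked into or mounted for
# the service container. Hosts and options below are invented examples.
from string import Template

nova_conf = Template(
    "[DEFAULT]\n"
    "transport_url = rabbit://$rabbit_host:5672\n"
    "\n"
    "[database]\n"
    "connection = mysql+pymysql://nova@$db_host/nova\n"
)

# The same template rendered with dev, test, or production inventories
# produces structurally identical configs -- the consistency benefit.
inventory = {"rabbit_host": "10.0.0.5", "db_host": "10.0.0.6"}
rendered = nova_conf.substitute(inventory)
print(rendered)
```

The deployment tool then ships the rendered config to the target node and starts the corresponding container image from the registry.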
But before we do that, we will go through three basic tools that will help us understand these containerized tools better. First is Ansible. Many of you will already be familiar with Ansible: it provides agentless configuration and deployment. It is the standard deployment model in many projects, and it has a flat deployment model; currently it does not handle lifecycle management, and there is no placement orchestration. It is used through Ansible playbooks or Ansible Tower, and service management through systemd is done in Ansible. It uses the typical Jinja template and YAML formats to build images or deploy containers.

Puppet, in contrast, follows a client-server model. It can manage lifecycles because it has an agent running on the nodes. Apart from that, it is open source, and TripleO mostly uses Puppet. Because Puppet follows a client-server model, it communicates regularly with clients running on each of the nodes on which OpenStack is deployed. So it handles lifecycle management pretty well, but it changes the host environment, as an agent is running on the nodes.

Then we have Kubernetes. We have heard a lot about Kubernetes these days. Kubernetes is an open source tool which helps deploy containerized applications, and it uses self-healing mechanisms and various techniques to schedule and deploy containers onto nodes very effectively. It provides HA, though work on HA is still going on in the Kubernetes community. Load balancing is a main feature of Kubernetes, and it has easy rollbacks and upgrades. It has its own concepts of pods, services, and so on, but we will not cover those in detail.

So why were we talking about these three tools? We will categorize all the container deployment tools in the current scenario according to these three tools.
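The self-healing idea mentioned for Kubernetes can be sketched as a desired-state loop. This is a toy model, not the real Kubernetes API: an operator declares a replica count, and a reconciliation pass keeps adding or removing pods until the observed state matches the declared spec.

```python
# Toy desired-state reconciliation, the core idea behind Kubernetes
# self-healing (pod names and the loop itself are illustrative only).
def reconcile(desired_replicas, running):
    # One reconciliation pass over the observed pod list.
    while len(running) < desired_replicas:
        running.append(f"pod-{len(running)}")   # schedule a replacement pod
    while len(running) > desired_replicas:
        running.pop()                            # scale down extras
    return running

state = ["pod-0"]              # two of three pods have crashed
state = reconcile(3, state)
print(state)                   # -> ['pod-0', 'pod-1', 'pod-2']
state = reconcile(1, state)    # spec scaled down to one replica
print(state)                   # -> ['pod-0']
```

Deployment tools built on Kubernetes inherit this behavior for free, which is exactly why the categorization in the next section matters.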
So this is a categorization we did based on the framework each tool uses: either Kubernetes, Ansible, or Puppet. We divided them into these three categories. The deployment tools include Stackanetes, which uses Kubernetes as its framework; Kolla-Kubernetes uses Kubernetes as well, as do OpenStack-Helm and Fuel CCP. We can see that Kolla-Ansible uses Ansible, and it uses Kolla's image repository. OpenStack-Helm has its own Helm repo, which internally uses Kolla, and TripleO uses Heat templates to manage containers, though they are planning to move to Kubernetes.

The main point of categorizing every tool into these three categories is that every framework a tool uses internally has its own characteristics: Ansible is agentless, Puppet uses an agent, Kubernetes supports many network drivers. All these features are inherited directly by the deployment tools which use them. So is that it? If they use the frameworks directly, what's the difference? Apart from the functionalities these frameworks already support, we have listed some differences which go beyond the frameworks they use.

We have listed down the features and compared three projects: Kolla-Ansible, Kolla-Kubernetes, and OpenStack-Helm. We chose only these three projects based on commit count this cycle, because they were at the top. As we can see, continuous integration is supported by Kolla-Ansible and Kolla-Kubernetes; in OpenStack-Helm it is a work in progress. On flexibility: in the Kolla-Kubernetes case, the Kubernetes operator can also be detached, which means either a Kubernetes operator or manual configuration can be used. HA support is not fully there yet in any of these three projects; HA comes directly from the underlying framework they use. Kolla-Kubernetes and OpenStack-Helm have CNI support.
CNI is just a networking driver interface for Kubernetes, in which we can implement our own plugin. In Kolla-Kubernetes and OpenStack-Helm, the CNI drivers could be Calico, Flannel, or there is a new project called Kuryr in this space. Kolla-Ansible uses host networking, which means it directly uses the network of the host on which it is deployed. The deployment time of all three projects is nearly the same, around 16 to 20 minutes. On design approach, Kolla-Kubernetes uses a layered approach, which means that each level is separated, while Kolla-Ansible follows a containerize-everything strategy, in which it containerizes the services as a whole. Hardware requirements are listed as 2 GB RAM, 2 NIC cards, and a 40 GB hard disk; OpenStack-Helm states that 16 GB RAM is required, with 2 NIC cards and a 32 GB HDD.

Apart from that, these are the platform supports of the three projects. First is bare metal: all three support bare-metal provisioning. Beyond that, OpenStack-Helm provides Ubuntu support currently; Kolla-Kubernetes supports CentOS, with work going on for other distros and Ubuntu support in progress. Kolla-Ansible supports many distros, as listed. Maintenance difficulty is medium in the Kolla-Ansible case, because it uses Ansible to deploy and configure the cluster; Kolla-Kubernetes and OpenStack-Helm use Kubernetes, which automatically handles the lifecycle management part and takes the load off the operator. On stability: Kolla-Ansible is mostly stable; Kolla-Kubernetes is a work in progress, though a few reports have shown it being used in production as well; OpenStack-Helm is not yet stable, but it has been used by AT&T, and there is a lot of work going on in that community too.
Active contribution shows 206 commits in the previous release for Kolla-Ansible, 125 in Kolla-Kubernetes, and 136 commits in OpenStack-Helm. So Kolla-Ansible has the majority and is production ready; OpenStack-Helm is in the development phase, and Kolla-Kubernetes is also in heavy development right now, though a few reports have shown that some people are using it in production. The latest stable release of Kolla-Ansible is Ocata; the other projects' stable releases are still to come. These are the associations listed with each of these projects: Kolla-Ansible, as we stated, has Ansible as its framework and uses Kolla's images; Kolla-Kubernetes uses Kubernetes as its framework and Kolla's images, which are generated through Kolla's templates; and OpenStack-Helm uses Helm, Kubernetes, and Kolla.

Does that mean containers solve everything? Looking at the deployments, we can see that most of them were done in 2015 and 2016, with 2017 just starting. The 2015 and 2016 clouds usually have Mitaka or Liberty running on them, so most of the production workloads are already using Mitaka or Liberty. This was sample data taken from 482 deployments, and it showed that 82% are in full operational use as of now, and 54% of the clouds launched in 2016-17 are in production. What does that mean? It means that Liberty and Mitaka are nearly end of life, along with all the previous versions in use. The point is that most operators are using previous releases, which are non-containerized, so moving directly to containerized deployments is a big challenge which needs focus, and a lot of work is being done to devise a way to move from non-containerized to containerized deployments.

There are also a lot of limitations of containerization; it doesn't mean that containerization solves everything. There is also a trade-off in which one to choose, VM-based deployments or containerized ones.
It depends on the requirements of the operator, for example whether strong security isolation is required, in which case VMs versus containers matters, and there are many other factors that need to be taken into account before choosing a suitable option. Containerization also makes the architecture a little more complicated, as it introduces a layer between the infrastructure and the services.

To summarize, we have seen that Ansible and Kubernetes have shown a lot of growth in recent years, with Docker as a major market player emerging very rapidly. So we guess that using all of these technologies in a single tool could be a good option for operators. Choosing a tool that manages your deployments doesn't mean it has to be one among these three; it totally depends on the use cases, and a careful analysis of the requirements is needed to choose a suitable tool. In Kubernetes plus OpenStack there have been some gray areas where work is required, and projects like Kuryr and Fuxi are filling these gaps, exposing the power of OpenStack to cloud containers and trying to bridge the gap between containers and OpenStack. Kuryr is a project for networking drivers, which brings Neutron networking to containers, and the storage drivers from Cinder are used by project Fuxi. So thank you. For any queries, you can reach out to us at janonymous, or Zephyr4L, or SP Surya, or you can catch me offline. Thank you very much.