Hi, everybody. Welcome. We are Fernando and Alberto, engineers from the innovation team at BBVA bank in Spain, and we are here to share with you our experience with OpenStack over the last three or four years. We would like to start with a simple question: how many of you are currently running an OpenStack cloud in production? Quite a few, yes. And how many of you have already tried to upgrade that live production cloud? Okay. For the rest of you, that's what it looks like when you are planning it. And yes, that's more like the reality: a time-consuming process with lots of things to tune, to migrate and to deploy, even with the best overall plan. So we can agree that a migration is a painful process, but we cannot imagine a worse scenario than one that involves the whole infrastructure.

Let's talk about the infrastructure we presented in past summits. We identified three main pain points in our old deployment. First of all, we had duplicated tools doing the same task at different layers. We also had a very slow incorporation of new OpenStack services. And we faced a complex architecture that we knew for sure could be simplified. We said in the second point that incorporating new services was quite slow, primarily because only a subset of OpenStack services is available in most OpenStack distributions out there, and the flexibility of the reference architecture is more limited than what we expected. So we were facing questions like: how could we dynamically add new components of a given OpenStack service at runtime, automatically? We just couldn't. Or how could we deploy and automate the network layer if we don't manage the infrastructure? We just couldn't. We found ourselves asking these questions over and over again, and all of this constrained the innovation process inside the team. Some of these constraints are due to bureaucracy, but most of them are because of the architecture we designed three years ago.

So, what if we could set up an environment where upgrades were deployed in a matter of minutes instead of months? What if we could elastically scale our platform out to respond to demand, as smoothly as our hosted applications do? And what if, at the same time, we could simplify the architecture so we could get rid of legacy applications designed for static environments? From that moment, that became our mission. The innovation team is always trying new applications to do things in a better way, but we have a special mantra here: we need to extensively test any new application against the old one, especially if we want to replace it.

The idea of this table is to compare our current deployment with the new approach we wanted to test. Let me summarize it in three main points. First, we move from virtual machines to containers as the main building block for any service. Consequently, we get rid of the virtual machine management tools. And third, we extend the power of SDN solutions down to the infrastructure layer as well. We were quite satisfied with all the great OpenStack services that you know, but the platform wasn't fully orchestrated: we deployed our OpenStack services automatically, but we couldn't control the whole life cycle, so we couldn't scale it or really manage it end to end.
Our deployment was also reproducible, thanks to the infrastructure-as-code approach we have followed from the beginning, but the hardware and networking layer was another story; that part is more difficult to orchestrate, as you know. And it was somehow portable, because everything was in our internal Git repo. But we had new challenges in our infrastructure, so we used new technologies to improve the platform and to reduce the time needed to get the hardware ready. First of all, right after the hardware provider gets everything connected, we are able to automate everything from that first moment until the machine is ready as a new Rancher host. Secondly, using Kubernetes and OpenStack based on containers, we solved the problems with scalability and portability and, of course, the migration process between releases. And we can also collaborate directly with the open source community, so we can share our code and give back the benefits of what we have done to the rest of the world.

Let's start from the beginning. We are going to walk you through our full-stack deployment, from the bottom to the top, starting with the hardware deployment. Some people say that a huge amount of time is spent on data center duties like cabling, configuring the hardware and so on. We think it doesn't have to be that way: if you don't deploy your bare-metal infrastructure automatically, you won't be able to respond to your customers' needs. So we rebuilt this part. We used to have lots of manual steps when deploying the hardware, and now we use two tools, Pixiecore and Waitron, to automate the process. They are very simple tools, but they let us deploy everything in minutes and in parallel, and as you will see later, it is really easy to do. Everything is templated in a Jinja-like language. As I said, the process is launched in parallel, so we can deploy as many hosts as we want in as little as six minutes; it takes the same time for one server or for many, and it only depends on the scalability of the web server we use to serve the operating system images.

So now Alberto is going to show you how we do this. We have here five servers. As you can see, they are in the firmware boot process, trying to get an IP address. We only have to make a really simple API call to Pixiecore, and with this simple API call we now have all five servers in installation mode. We can check it with another command: yes, as you can see, we have five servers in installation mode. Moving to the console, they are going to take their IP addresses and start the installation process. And now we can see how the Red Hat installation is progressing; it is starting, and in about three or four minutes everything will be ready to be added as new Rancher hosts. Here in Pixiecore we can follow the whole process. Okay, we will come back to it later, so let's move on with the presentation; after a few minutes we will see that everything has finished correctly.
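To give a rough idea of how this fits together: Pixiecore, when run in its "api" mode, asks an external boot API with a GET to /v1/boot/&lt;MAC&gt; and boots whatever kernel, initrd and command line come back as JSON; in our setup Waitron plays that role. Below is a minimal stand-in sketch of such a boot API, not our actual Waitron configuration; the MAC address, URLs and kickstart path are made-up examples.

```python
# Minimal stand-in for the boot API that Pixiecore (in "api" mode) queries.
# Pixiecore asks GET /v1/boot/<mac> and expects JSON describing what to boot.
# Host names, URLs and the MAC below are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# MACs we want to (re)install right now; anything else is left alone.
HOSTS_TO_INSTALL = {"52:54:00:12:34:56"}

BOOT_SPEC = {
    "kernel": "http://deploy.example.local/rhel/vmlinuz",           # hypothetical
    "initrd": ["http://deploy.example.local/rhel/initrd.img"],      # hypothetical
    "cmdline": "inst.ks=http://deploy.example.local/ks.cfg",        # rendered kickstart
}

class BootAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        prefix = "/v1/boot/"
        mac = self.path[len(prefix):].lower() if self.path.startswith(prefix) else ""
        if mac in HOSTS_TO_INSTALL:
            body = json.dumps(BOOT_SPEC).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            # Non-200 tells Pixiecore to ignore this machine
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 4242), BootAPI).serve_forever()
```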
Okay, let's move on to the second layer of our architecture. As I said, we had a fixed architecture built around a non-orchestrated virtual machine manager. We also had a fixed network topology that made it difficult to adopt new OpenStack services, especially if those services need to provision a new network, with bottlenecks and a lack of visibility into our services inside the perimeter. Our response to this was to embrace the SDN also for the underlay, and to provision services using Kubernetes. With the SDN approach, we can leverage the power of distributed security and micro-segmentation for our infrastructure applications too. Moreover, the automation is easy thanks to the APIs offered by the SDN provider, Rancher and Kubernetes: everything can be done programmatically, without relying on vendor-specific tools, front ends and admin consoles, because that's bad practice, right?

This slide alone was worth a talk at a summit three years ago. We presented this network topology, with quite a lot of subnets, which made for a fixed architecture in the underlay. We wanted to evolve it to be more flexible, so we started to wonder whether the SDN could also manage that part of the infrastructure. This was our answer. As you probably know, overlay networks are commonly used to deploy isolated networks for each tenant in OpenStack: if we have several projects, none of them share traffic with the others unless we want them to. That is what we call, as you know, the user overlay. By using an SDN controller also at the underlay level, we can achieve the same functionality for our infrastructure, so we don't need to provision new networks just for security reasons. That way we simplified the number of networks from many tens to only three: the storage network, the data network needed for the overlays, and the SDN management network.

Thanks to provisioning all services as containers, we can even deploy the SDN components themselves as containers, dynamically. We only have to connect these SDN containers directly to the physical network using macvlan devices in Docker, so we can migrate or redeploy a given container without breaking the whole infrastructure. Some components of the infrastructure need to run as a virtual machine, so for those we developed a KVM container that automatically picks up the configuration and applies it to the virtual machine inside. In the end we can manage those virtual machines as if they were containers; indeed, we manage them as containers.

Let's take a closer look at our unit of compute, the Docker host. Our Docker hosts are composed of a minimal Linux distribution with Docker and KVM, that's all. We deploy an Open vSwitch based component from Nuage Networks, the VRS, which enables the Kubernetes overlay network for the OpenStack components; all the infrastructure containers are connected to this Open vSwitch flavor. For the user plane, we deploy instances inside the Nova container, which has the libvirt services running on it, and there we deploy a second VRS, also from Nuage Networks, which gives connectivity to the end users' instances.
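As a small illustration of the macvlan trick mentioned above, this is a minimal sketch, using the Docker Python SDK, of creating a macvlan network bound to a physical NIC and attaching a container to it; the parent interface, subnet and image name are hypothetical, not our production values.

```python
# Sketch: attach an SDN container directly to the physical network with a
# Docker macvlan network. Interface name, subnet and image are hypothetical.
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.0.10.0/24", gateway="10.0.10.1")]
)

# macvlan network bound to the host NIC that faces the underlay
client.networks.create(
    "sdn-underlay",
    driver="macvlan",
    options={"parent": "eth1"},   # physical interface on the Docker host
    ipam=ipam,
)

# Run the SDN component as a container connected to that network
client.containers.run(
    "example/sdn-controller:latest",   # hypothetical image
    name="sdn-controller",
    detach=True,
    network="sdn-underlay",
)
```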
And now Alberto is going to continue from this level of the infrastructure and tell us how we deploy the rest of it. Alberto? Thanks, Fernando. With this new approach, everything running as containers, we don't need a VM management tool any more: just using Kubernetes, we can deploy all the OpenStack services as well as the SDN components from Nuage. However, a Kubernetes cluster installation can be hard if you don't have a container management tool. The advantage of using Rancher in that case is that Rancher deploys all the Kubernetes components distributed across all the hosts you have available in your environment, so if you add new hosts in the future, Rancher will deploy all the components needed to join them to the existing cluster. Rancher can also be seen as a PaaS, because you can deploy your own applications with one click. A Rancher catalog is just a group of templates for Kubernetes replication controllers and pods that you can import into Rancher and then use to deploy your own application. In our case, as Fernando said before, we have all the OpenStack services and the Nuage SDN components packaged as containers and deployed that way.

So now let's take a look at the process. First of all, we start with the basic hardware deployment in order to get the Rancher platform where we are going to run our deployment. Once we have all the Docker hosts registered in Rancher, Kubernetes is deployed across all the hosts available in the environment. After that, using a Rancher catalog, we deploy the Nuage SDN components as containers, and we have another Rancher catalog template to deploy all the OpenStack services on top of that. Just to clarify, we don't use dedicated hosts for the compute services; for instance, you can have end-user instances running in KVM on the same Docker host that is running the control-plane services. You can of course lay it out differently, but in our case this works better for the different applications we run on top of it.

Nowadays, as you can imagine, it's not really necessary to explain Kubernetes itself, but we do want to share our implementation and usage of it. We are using a basic Kubernetes deployment based on services, replication controllers and pods; we are not using the newer Deployment objects, just services, replication controllers and pods. The time needed to deploy all the OpenStack services and the SDN solution, compared with the old approach based on Foreman and Puppet, is considerably less: we need much less time to deploy everything, and because of that we don't need Puppet or Ansible any more.
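To make the service / replication controller / pod pattern concrete, here is a minimal sketch with the Kubernetes Python client of what one of those catalog templates boils down to; it is not our actual Rancher catalog template, and the namespace, labels and image name are hypothetical.

```python
# Sketch: a Service plus a ReplicationController for one OpenStack component.
# Namespace, labels and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The Service is the stable point of contact for the other OpenStack services
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="keystone"),
    spec=client.V1ServiceSpec(
        selector={"app": "keystone"},
        ports=[client.V1ServicePort(port=5000, target_port=5000)],
    ),
)
core.create_namespaced_service(namespace="openstack", body=svc)

# The ReplicationController keeps the desired number of keystone pods running
rc = client.V1ReplicationController(
    metadata=client.V1ObjectMeta(name="keystone"),
    spec=client.V1ReplicationControllerSpec(
        replicas=1,
        selector={"app": "keystone"},
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="keystone",
                    image="example-registry/keystone:mitaka",   # hypothetical image
                    ports=[client.V1ContainerPort(container_port=5000)],
                )]
            ),
        ),
    ),
)
core.create_namespaced_replication_controller(namespace="openstack", body=rc)
```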
Another piece of functionality we rely on is the Kubernetes etcd as a configuration database. Using etcd for this purpose is a good way to take the configuration files out of your container definitions: the idea is to build the configuration file for each OpenStack service by merging the upstream sample and template files with our own settings, which live in etcd. Let me give an example to make it clear. With a basic Kubernetes cluster deployment, you can access etcd from all pods, so we asked ourselves: what happens if we use etcd as the configuration database for our whole environment? The answer is that with this scheme we don't need to keep configuration files inside the Dockerfile definitions; we just fetch the parameters from etcd. To do that, we designed a set of structures inside etcd that we are going to explain now. For general-purpose values, for example environment variables shared by different OpenStack services, we use a simple structure, in our case something like general/mysql plus a key and a value, valid for the whole OpenStack deployment. But for the configuration files themselves we need to separate the key/values per service, because, as you know, values such as the Keystone auth settings appear in many services with different content. So we created another structure, separated into sections: the type, in our case controller or compute node; the service, for example Nova or Glance; the file, nova.conf in this example; the section where you are going to introduce the option; and finally the key and the value. With this approach we don't need to keep any configuration file inside the container definition; we just read our settings for a specific OpenStack service from etcd.

Finally, the use of Kubernetes, as we said before, is a piece of cake. We use services, replication controllers and pods. The service, for us, is the point of contact for the other OpenStack services; the replication controller keeps the number of replicas we need in our environment; and the pod is the minimal Kubernetes unit where the containers actually run.

Now it's time to show you how we build and deploy our OpenStack services as containers. In this new approach we build the OpenStack services in the container directly from the upstream code, so just by changing one variable, the OpenStack release, which in our case is Mitaka, you can build the image for that specific upstream release; if you keep the rest of the Dockerfile unchanged, everything still works, and you can upgrade faster instead of waiting for other options. Once we have the container built for the right release, we have an entrypoint that, first of all, creates the config files by getting our parameters from etcd and merging them with the upstream configuration files; the second step is to update the database schema; after that, if we are running a given OpenStack service for the first time, we import its endpoints into Keystone and obviously create its database in Galera; and finally we start the OpenStack service. Basically, all the images, as you know, are based on the Kolla project, but we have a custom phase that reads our parameters from etcd, which is one of the reasons we are not using that project as-is.
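A minimal sketch of that first entrypoint step, assuming a python-etcd3 client and the /&lt;type&gt;/&lt;service&gt;/&lt;file&gt;/&lt;section&gt;/&lt;key&gt; layout described above; the etcd endpoint and file paths are illustrative, not the exact code we run.

```python
# Sketch of the "build the config file" entrypoint step: overlay the upstream
# sample file with the key/values stored in etcd for this service.
# Endpoint, paths and key prefix are illustrative.
import configparser
import etcd3

ETCD = etcd3.client(host="etcd.openstack.svc", port=2379)   # hypothetical endpoint
PREFIX = "/controller/nova/nova.conf/"                       # /<type>/<service>/<file>/

config = configparser.ConfigParser()
config.read("/etc/nova/nova.conf.sample")                    # upstream sample file

# Every key under the prefix becomes <section>/<key> = <value> in the final file
for value, meta in ETCD.get_prefix(PREFIX):
    section, key = meta.key.decode()[len(PREFIX):].split("/", 1)
    if section != "DEFAULT" and not config.has_section(section):
        config.add_section(section)
    config.set(section, key, value.decode())

with open("/etc/nova/nova.conf", "w") as out:
    config.write(out)
# ...then: db sync, register Keystone endpoints on first run, start the service.
```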
Once we had more experience working with OpenStack, we wanted to contribute back, and we found that the best way to see our changes upstream quickly was, obviously, to collaborate with the community. So, just for the innovation team inside the bank, we moved from a commercial OpenStack distribution to the upstream code in order to move faster. Using the upstream code and changing only the OpenStack release variable you saw before, we can upgrade quickly instead of waiting months for a specific packaged release, and for innovation it is very important to be fast at this kind of thing. Obviously we have an agreement with Red Hat, and we are comfortable with Red Hat and its support for the official production cloud, but in the innovation team we are working on services that are not yet covered by that agreement.

So now we can review again the three points of the mission we stated at the beginning. With the container definitions, and using the environment variable to pin the upstream release, we are able to upgrade faster. With Kubernetes as the orchestration layer, we have a supervisor that scales out the infrastructure. And with Nuage defining both the overlays for the infrastructure and the overlay for the end users, we simplify our architecture.

So now, Fernando, it's demo time. Give me one moment to prepare. First we are going to check on the installation we left running in the first part of the presentation; if you can switch to the console, Alberto. Of course. You can see here the five servers we deployed some minutes ago: they are ready, showing the login prompt, so they can be added as new Rancher hosts. Now we are going to see how we deploy a complete OpenStack cloud from scratch using Kubernetes and Rancher. First of all, we can see that in our infrastructure there are five hosts, five Rancher hosts. Here you are. And then, Alberto? Okay. One very important thing here is that Rancher, as we said before, deploys the Kubernetes cluster distributed across all the available hosts in your environment; as you can see, we have the kubelet and the other containers distributed across every host, so if you add a new host, the Kubernetes cluster is extended to it in the same way. Another important thing we mentioned before is the catalog. We have here the possibility to import our catalog, which is public on GitHub. Here we have our repo with the different Kubernetes templates that define the OpenStack services. Now we have the catalog here. As you can see, there are different components that can be deployed separately, but for the demo we have a special all-in-one OpenStack template, which Rubén here developed for the demo, because we don't have enough time to deploy everything separately; it is just a group of the different templates for the different services.

The first thing we are going to do is load etcd with our settings, to specify our OpenStack deployment. This is just a one-off process, and we can see our changes being loaded into etcd. This is the specification for our OpenStack deployment: here we have all the configuration we need to define the OpenStack services like Nova, Glance, et cetera. Okay, it's ready, we have etcd loaded with all our parameters, so we can deploy OpenStack. We are going to launch it. Here we have all the OpenStack services deploying now. This is an orchestrated process, so it takes care of all the dependencies: for example, we start by deploying the database, then we continue with Keystone, and so on. Finally we will have a functional Horizon service, ready to be used by any user. So this is an orchestrated phase that starts, for example, with Galera; we are deploying the Galera container now, after that RabbitMQ, and then Keystone, Glance and the different services, and the last one is Nova compute. On this screen we are watching all the services being deployed, and we have a small wait until we can continue with Horizon, which is the last component. Just to be clear, we don't have any fixed or hard-coded IP addresses; everything is done dynamically. The Nova compute container, for example, waits for the other services to be ready and only then installs itself.
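A minimal sketch of what that waiting can look like, assuming the dependencies are reachable through their Kubernetes service names; the names and ports below are just the usual OpenStack defaults, used here for illustration.

```python
# Sketch: a dependent service waits for its prerequisites without hard-coded
# IPs, by resolving Kubernetes service names and polling until the ports answer.
import socket
import time

DEPENDENCIES = [
    ("mariadb", 3306),    # Galera
    ("rabbitmq", 5672),
    ("keystone", 5000),
]

def wait_for(host, port, timeout=600):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return
        except OSError:
            time.sleep(5)   # not up yet, retry
    raise RuntimeError(f"{host}:{port} still not reachable")

for host, port in DEPENDENCIES:
    wait_for(host, port)
# prerequisites are up: continue with this service's own entrypoint steps
```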
So we may have to wait a little. Keep in mind the time it takes to deploy OpenStack here, because in a production environment, as you know, it takes much longer, and here in maybe three or four minutes we get a full OpenStack deployment ready. I hope. Fingers crossed. Meanwhile, remember that everything is dynamic: in our definition there is one controller and one pod for Nova compute, but just by changing a small parameter in the catalog definition we could deploy, for example, five instances of Keystone or Swift if we wanted. Okay, Fernando, are we ready? Not yet; keep in mind that there are a lot of services deploying, Keystone, Glance... and we are waiting for the last one. The compute node is deploying and installing, and this is the log output for the compute node. I think, Fernando, we can already try Horizon. Maybe one minute more, but let's try it, no worries. We have to look for the IP address that Kubernetes has given to the Horizon service, and I am pointing the browser at that IP address. It has just been deployed, so it may take a moment. For the moment we have just one compute node. Okay, Fernando. So here we have the login page from Horizon.

But we want more; we are more ambitious and we want more from this talk. How was the phrase? I think we are going to use a script to load the network. So now we have a complete OpenStack environment, and we are going to create a new network, as anyone would do with a clean OpenStack deployment, and after that we are going to upload a CirrOS image, just to show you how we can create a new instance. We do it from the CLI because it is faster and because we already have the image in the infrastructure. So now we create the network. That's right. Okay, let's move on to Horizon. No, sorry, let me upload the CirrOS first; we are uploading the CirrOS image. Here we are. Now we can go to Horizon to check the image, and after that, as you know, we can launch a new instance. Let's launch a new instance, and then we want to show you something more, a special surprise for this stack. Okay: launch instance, CirrOS, we select the flavor, the image and so on, and after a minute or so we will have the instance created in this new deployment that was built in the five minutes you have seen.

So that's cool, but it's not cool enough, because we can do better. Let's go, and maybe you can show what we are going to do. Yes, we are going to scale our Nova compute out by two more, so we will have three Nova compute nodes, and to do that, clearly, we are going to edit the replicas, the number of replicas, in Kubernetes, and set it to three. Okay, as you can see we now have new compute nodes, and in a few seconds they are available. Okay, this instance is running. And where is the admin view? Okay, the hypervisors. So here we have just scaled our platform, as you have seen, with Rancher: we went from only one Nova host to three hosts, and they are dynamically ready in our OpenStack deployment.
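The same scale-out we just did through the Rancher UI can be expressed programmatically; here is a minimal sketch with the Kubernetes Python client that patches the replica count of a nova-compute replication controller (the name and namespace are hypothetical).

```python
# Sketch: scale the nova-compute replication controller from 1 pod to 3
# by patching its replica count. Name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.patch_namespaced_replication_controller(
    name="nova-compute",
    namespace="openstack",
    body={"spec": {"replicas": 3}},   # from 1 compute pod to 3
)
```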
So now we can deploy some new instances, and we will see how each of them gets scheduled on one of these new Nova compute servers. Okay, so Alberto is going to create the instances there, right? Yes, two more, in the same way: flavor tiny, and so on. The idea is that the OpenStack scheduler distributes the instances across all the available hosts. Once these instances show up as running in the list, we can go to the hypervisor view and see how they are spread over the new Nova compute nodes. They are spawning for the admin project, and in the hypervisor view you can see it: here you are, we have one instance on each Nova compute server.
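For completeness, a minimal sketch of the same check done from the API side with openstacksdk: boot a few CirrOS instances and print which compute host each one landed on. The cloud name, flavor, image and network names are illustrative, and seeing the hypervisor host requires admin credentials.

```python
# Sketch: boot a few instances and show the hypervisor each one landed on.
# Cloud, image, flavor and network names are hypothetical.
import openstack

conn = openstack.connect(cloud="innovation-lab")    # entry from clouds.yaml
image = conn.compute.find_image("cirros")
flavor = conn.compute.find_flavor("m1.tiny")
network = conn.network.find_network("demo-net")

servers = []
for i in range(4):
    server = conn.compute.create_server(
        name=f"demo-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    servers.append(server)

for server in servers:
    server = conn.compute.wait_for_server(server)
    # hypervisor_hostname is only populated for admin users
    print(server.name, "->", server.hypervisor_hostname)
```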
Now I think we should show the networking part, so Fernando can show you how the SDN is split between the infrastructure layer and the overlay layer for the end user, and how we control all the security from a single dashboard. Refresh. Ah, okay, here we are; this is the new Nuage release. This is the SDN control panel, where we can see all the networks, define all the ACLs and everything security-related, and where we can control the whole topology of our infrastructure. For example, if we go to networks, here we have one overlay network for Glance, and here is the Glance container whose network is being controlled by Nuage; if we want, we can define new ACLs for that network. And here, under networks, we can see how all our OpenStack services are deployed as containers, each one on its own network, and thanks to micro-segmentation we can define how they communicate with each other, so we can define security rules for each service. Moreover, we also have the chance to control security at the overlay level, so we can control how the user instances communicate with each other and define ACLs for the overlay networks as well. So with only one dashboard we can control the security of our whole infrastructure, from the user level down to the infrastructure layer.

Well, that's all we have to show you. Thanks very much for coming. That's our team; we are very proud of them, because without them this was only an idea six months ago, and we made it happen thanks to them. And we want to thank, obviously, the Nuage team, and Paco and Florian for their work. Hats off to them. So thank you very much. If anybody has any questions, there is a mic here; and if not, thank you very much for coming. This is our repo on GitHub, so if you want to collaborate with us or contribute to the project, we appreciate your help. There is a mic over there, sorry.

Yes, thank you very much, really nice presentation, I liked it very much. I'm with Nuage. I have a question about Ironic: you use a PXE boot service for your bare metal provisioning, but I believe you are not using Ironic. Why not? Have you considered it? We wanted simplicity. We showed this Pixiecore software, which is very small and very simple to use, and as you have seen, with only one API call we were able to deploy everything. Ironic does more than what we need for that. Ironic is great when your users need not only instances but also bare metal servers deployed dynamically, but for our own infrastructure, Pixiecore and Waitron are good enough. Another important thing is that we are the innovation team; as you know, we need to try different technologies in order to test them. Pixiecore and Waitron are good for us, but obviously you can use Ironic. Thank you. No more questions? Thank you very much for coming. Thank you.