Well, hello everybody, and welcome again to this OpenShift Commons briefing. We're really pleased today to have the folks from Produban with us to talk about their journey from PaaS to OpenShift 3 on OpenStack, so there's lots of great information here. Some of you may have heard the main presenter, Christian, talk previously at Red Hat Summits about their journey to OpenShift 2. That was an awesome presentation, so we're really thrilled to have him back to do this talk and tell us how the journey has progressed. What we're going to do is give them about 30 minutes to talk about their experiences, their lessons learned, and the things they want to see in the next generation of OpenShift, and then you can ask questions in the chat room. Everybody is muted except the speakers. There are some folks from Produban with us, so they may answer questions in the chat for you. At the end of the presentation, we'll ask any unanswered questions out loud, and we may repeat the questions you asked in chat because chat isn't saved in the recording. So without further ado, Christian, I'll let you take it away and introduce your team, and let's get started.

Hello, hello everybody. My name is Christian. The idea for this presentation is to talk about how we have deployed OpenShift version 3 on Red Hat OpenStack. So let's start. Who is who in this presentation? My name is Christian; I am the PaaS project manager. Here with me is Pablo Alonso Rodriguez, who is our principal PaaS engineer, and also Eduardo Minguez from Red Hat, who is our senior architect.

So let's go to the next slide. I would like to talk about what Produban is and what we do. Produban is a global company that belongs to the Santander Group; it is the IT company for the Santander Group. We are more or less 5,000 professionals around the world. In this slide, we can see that we are in Mexico, the United States, Brazil, Argentina, Chile, Spain, Portugal, the UK, Germany, et cetera. So we are basically a global company belonging to the Santander Group.

In this slide, I would like to explain our main objective in using OpenShift. We have to design, deploy, and operate a multi-region private PaaS solution. For this PaaS, we are using OpenShift version 3. We were using OpenShift version 2 before, but we migrated to version 3 because we love containers; we think containers are a great technology. We use this PaaS solution for financial applications, so from our perspective this PaaS is really, really critical. For us, it's a big challenge to provide a very stable PaaS environment for this kind of application.

So why is this service multi-region? Because we have banks in the United States, Brazil, Argentina, Chile, Mexico, and Portugal. We have five regions right now. OpenShift is currently deployed only in two regions in Spain, but in the coming months we will have two OpenShift deployments in Mexico as well. Each region has three availability zones; in this slide we can see the different regions and their availability zones.

Well, in this slide I would like to talk about our OpenShift architecture. We can see that we have several layers. The first layer is load balancing.
We use HAProxy to balance the different kinds of services: the OpenShift console, the OpenShift API, internet-facing applications, and S3 services. We have deployed the load balancer across the three availability zones; here we can see availability zone 1, availability zone 2, and availability zone 3. The three availability zones are deployed on OpenStack; the version we use is Juno.

The next layer is the OpenShift masters. We have three OpenShift masters for each region, and every master is deployed in one availability zone. These nodes also run the etcd service. In the next layer we have three or more nodes for OpenShift infrastructure. We use these nodes for routers and similar infrastructure components, and we could also use them for the OpenShift internal registry.

The next layer is the OpenShift nodes themselves, deployed across the three availability zones. We have two groups of nodes: one group for production and the other for non-production services. We have several nodes in each region, and we deploy new nodes every month because we are growing more or less 20% every month.

We also have persistent storage, persistent volumes. We use Ceph; we have a Ceph cluster for our persistence services, which we use basically for MySQL, PostgreSQL, and Cassandra. We have also activated the RADOS gateway for object storage.

For PaaS infrastructure monitoring, we use Wily Introscope. We have developed a very big monitoring solution; we monitor every PaaS component that we have deployed in each region. In another presentation I would like to show our monitoring solution. For reporting, we have CloudForms. We have deployed the CloudForms nodes, but at the moment we are not using the CloudForms solution for OpenShift; the idea is to start using it soon. For OpenShift infrastructure logging, we are using an Elasticsearch, Logstash, and Kibana solution. We use this log-management cluster only for the infrastructure logs, not for application logs.

We also have a special node, a jump host. Through this host, we deploy the OpenShift cluster; Eduardo will talk about how we deploy the OpenShift infrastructure using Ansible and Puppet. It's a very interesting topic. For application monitoring, we have a separate cluster, also with Wily Introscope: three nodes, one in each availability zone. We also deploy three Docker registries: one for Europe, another for Mexico, and the last one for Brazil. This architecture also has a special data lake, a big data cluster, where we store only logs from applications. We developed a special console for it; we are not using Kibana there. This console is integrated with OpenShift security in order to get the logs from the different projects.

Basically, this is our architecture. It's a very complex solution. For authentication, at the moment we are using LDAP, but we are planning to migrate to OAuth 2. We also use external services from Red Hat: IdM, DNS, LDAP, proxy, Satellite, and Capsule. In the next part of the presentation, Pablo Alonso will talk about networking with the Nuage SDN on top of OpenStack.
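To make the persistent-storage layer described above concrete, here is a minimal sketch of what an OpenShift 3 persistent volume backed by a Ceph RBD image could look like. The monitor addresses, pool, image, and secret names are hypothetical placeholders, not taken from Produban's environment:

```yaml
# Hypothetical PersistentVolume backed by Ceph RBD; all names and
# addresses below are illustrative placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.1.10:6789   # one Ceph monitor per availability zone
      - 192.168.1.11:6789
      - 192.168.1.12:6789
    pool: paas-volumes
    image: mysql-vol0001
    user: admin
    secretRef:
      name: ceph-secret     # secret holding the Ceph client key
    fsType: ext4
    readOnly: false
```

A pod's persistent volume claim then binds to a volume like this one, which is how databases such as MySQL or Cassandra keep their data across pod restarts.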
Thank you very much. As we have already mentioned, we use Red Hat OpenStack and the Nuage SDN, and we deploy OpenShift on top of them. Why do we use OpenStack infrastructure as a service? Many of you may be familiar with the typical infrastructure-as-a-service advantages, like easier VM management, so there's nothing new to say about that. But because of some special requirements of our applications, at the moment they should be in-house. This is one of the reasons we decided to have an in-house solution instead of using a public cloud.

Why did we use the Nuage SDN? If you don't know about Nuage, it's an SDN made by Alcatel-Lucent. It's a very complex and good SDN solution. SDN stands for software-defined networking. If you know OpenShift, you may already be familiar with that, because OpenShift has its own SDN. We use Nuage because it allows very easy network management, easier than in the traditional world, of course. Our networking environment is completely isolated, so we can decide what comes in, what goes out, and nothing else, and we can avoid many IP addressing problems. The biggest advantage is that we can define very fine-grained traffic ACLs, firewall rules, we could say: from A to B there can only be TCP traffic on ports 80 and 443, and nothing more. We can restrict in a very specific way which traffic is allowed and which traffic is not, which is sometimes known as microsegmentation. And last but not least, we also leverage the fact that Nuage allows interconnecting OpenStack projects in a very flexible way. We use that to access some back-end services we need, in a more secure way than just going out and in again over the internet. So this is a very good solution, and we are happy using the Nuage SDN at the infrastructure level.

There are some aspects that can be improved in both the OpenStack-plus-Nuage layer and the OpenShift networking. As I have already said, OpenShift has its own SDN, so if we deploy OpenShift on top of OpenStack and Nuage, we are deploying one SDN on top of another SDN. This is not a good idea, and there are also some things in the OpenShift SDN that could be improved: it only allows some very basic networking configurations, while Nuage allows very complex configurations. Because of that, the Nuage people are developing a Nuage plugin for OpenShift; I think there are already some traces of that in OpenShift Origin, but I'm not sure. We have recently participated in a proof of concept of this plugin, which is in a very early development stage, and we have seen that they already offer a bunch of interesting features, like having one subnet per project.

If you are not familiar with the OpenShift SDN, it allocates a subnet per node, and it allows isolating networking per project. You can have network connections between pods of project A, but not from project A to project B. You can join project C to project A, and then pods from project A can talk with pods of project C, but not with pods of project B. This is completely decoupled from subnets, because subnets depend only on nodes; that can be difficult to think about if you come from the traditional networking world. Nuage offers one subnet per project instead, which is more logical and allows some advanced networking configurations. The most important feature, and I think I should have put it first, is that it allows performing very fine-grained network configuration at the OpenShift level, as Nuage already does at the OpenStack level. And one important feature we are very happy with is that it keeps the OpenShift user experience unchanged: if you use OpenShift with Nuage, users will see the same as if they were not using Nuage, unless you want to configure something else. Then of course you will see something different, but not if you have touched nothing.
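To illustrate the project-isolation behavior of the default OpenShift SDN described above: with the ovs-multitenant plugin, joining and isolating pod networks is done from a master with `oadm pod-network`. A minimal sketch, wrapped in hypothetical Ansible tasks to match the automation discussed later in this briefing; the project names and host group are illustrative:

```yaml
# Hypothetical tasks run on an OpenShift 3 master using the
# ovs-multitenant SDN plugin; project names are placeholders.
- hosts: masters[0]
  tasks:
    - name: Let pods in project-a and project-c talk to each other
      command: oadm pod-network join-projects --to=project-a project-c

    - name: Later, isolate project-c again so only its own pods can talk
      command: oadm pod-network isolate-projects project-c
```

Joining merges the projects' virtual network IDs, so pods of project B remain unreachable, exactly as described above, and none of this depends on which node subnet a pod landed on.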
We could provide them some feedback. We were very interested, for example, in tuning the solution to give users some self-service related to networking configuration. One of the advantages of SDNs at the OpenStack level is that you have self-service: no one from the infrastructure team has to touch Nuage for you to configure your network. It would be interesting for PaaS users to have that same kind of self-service, and there were other minor aspects as well. Of course this is at a very early development stage, and they are willing to develop another big bunch of interesting features in the future, like, for example, a way to avoid the double overlay, I mean having two SDNs when you deploy OpenShift with Nuage on top of OpenStack with Nuage. So we look forward to news about that, because we found it very interesting. Now our teammate Eduardo is going to talk about our automation and the steps we follow.

Hi everyone, this is Eduardo. I will try to be as fast as possible, and I would like to talk about the automation we have been working on over the past few months. As Pablo and Christian said when talking about the architecture, we run OpenShift version 3 on top of OpenStack plus the Nuage SDN. We first started with OpenShift version 3.0, which we deployed using manual instances from Horizon, and then we started to think about what to do next. This was our learning path: we faced a few issues and learned along the way how to deploy OpenShift the proper way. We needed to do a lot of things for our particular environment, and we faced some restrictions, speaking about networking and other things, that made us customize the OpenShift installation a lot.

For us, adding new nodes was pretty crucial because we started to grow pretty fast. We started with a few nodes and then suddenly found we were low on nodes. But since we had customized the node configuration a lot, we couldn't just run the Ansible installer to add new nodes. So we started working on Heat templates and a pretty basic script that handles all the prerequisites, like the Docker installation and all our customizations. We call that script the common assets, and we started to work on the automation. The problem was that we didn't have any configuration management, so there were servers deployed with some configuration and others, created before, that didn't have it. With the latter, we faced the snowflake-server problem, where we had to do a lot of things by hand; the last example was expanding the /var filesystem.

We wanted to automate new deployments, so we started to think about how to do that: maybe a golden OS image that we would clone for every node, maybe Heat plus cloud-init scripts, or maybe PXE-based deployment of instances. We were also thinking about configuration management with Ansible or Puppet, and about Puppet lifecycle management with Satellite or on its own. Finally, the winners were Ansible and Puppet, using Satellite.
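As a rough illustration of the Heat-plus-scripts approach mentioned above, a minimal HOT template for booting one node might look like the sketch below. The image, flavor, network, volume size, and script names are hypothetical placeholders, not Produban's actual values:

```yaml
heat_template_version: 2014-10-16
description: >
  Minimal sketch of a single PaaS node; all names below are illustrative.

parameters:
  node_name:
    type: string
    description: Hostname for the new node

resources:
  node:
    type: OS::Nova::Server
    properties:
      name: { get_param: node_name }
      image: rhel7-golden            # hypothetical golden image
      flavor: m1.xlarge              # hypothetical flavor
      networks:
        - network: paas-internal     # hypothetical Nuage-managed network
      # cloud-init runs the prerequisites script (Docker install, tuning, ...)
      user_data: { get_file: common-assets.sh }

  docker_volume:
    type: OS::Cinder::Volume
    properties:
      size: 100                      # GB, dedicated to Docker storage

  attach_docker_volume:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: node }
      volume_id: { get_resource: docker_volume }
```

The point of the design is that the instance boots already compliant with the prerequisites, so configuration management only has to layer on role-specific settings afterwards.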
So we started to build some Puppet modules, like the ones on the slide, for base stuff: base packages, DNS and NTP, a lot of things that are common to all the instances in our infrastructure. Also for things like managing the Docker configuration, for example the Docker storage setup and the proxy configuration, and things like open-files limits. And also for the OpenShift nodes and the OpenShift masters. We wrote all these Puppet modules with parametrization in mind, to be able to change a lot of things with just a simple parameter modification. We use Hiera with a Git repository backend, so we are able to track modifications to the parameters. The Hiera data and the Puppet modules are deployed in Capsule, which means that we use Satellite to manage all our configuration.

For Ansible, we created a few playbooks. The first one just deploys a basic instance, which means creating the OpenStack instance, giving it a floating IP, attaching the correct volumes, and so on. We have an internal DNS, which Christian talked about before, and we need to register the instance in that DNS. We also register the instance in Satellite to be able to apply all the modules, then apply the modules, run a yum update so all the instances are on the same content view, and review. With this basic provisioning playbook, we get a proper base server, which is customized later according to its role.

We also created a custom playbook to add a new node to the OpenShift cluster, which covers the tasks on the slide: for example, creating the node certificates, copying them from the master to the node, moving the host to the proper Satellite host group, and running Puppet, which means all the Puppet modules attached to the OpenShift node are applied, all the Docker and OpenShift configuration and that kind of thing. We also set the proper OpenShift labels, set the node as non-schedulable, and pre-pull the Docker images needed for our infrastructure. Finally, we review the node, make it schedulable, and that's it.

So the current procedure for installing a new region is this: we use the custom playbook to create all the OpenStack instances, not just the OpenShift ones but also, for example, the monitoring ones. We install OpenShift using the default openshift-ansible playbooks, but only for the masters and etcd, because we use our custom playbook for all the nodes. And that's pretty much it. We still have a few to-do features, for example automating all the post-installation steps of OpenShift, like the customization of the SCCs or deploying the routers. We want a single button to deploy one region with just a click or a single script call. And we are also working on how to automate the OpenShift ecosystem: for example, the monitoring infrastructure, Elasticsearch, the persistent volume creation, and those kinds of things. And that's it for my part. Thank you. That's all.
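Pulling together the node-addition flow Eduardo just described, a much-simplified Ansible sketch might look like the following. The host groups, variable, label values, and helper script are hypothetical, and the Satellite/Puppet and certificate-distribution steps are reduced to placeholders:

```yaml
# Greatly simplified sketch of the add-node flow; every name below is a
# placeholder ({{ new_node }} is assumed to be set in the inventory).
- hosts: new_node
  tasks:
    - name: Pre-pull the infrastructure image that pods depend on
      command: docker pull registry.access.redhat.com/openshift3/ose-pod

- hosts: first_master
  tasks:
    - name: Create and distribute the node certificates (placeholder helper)
      command: /usr/local/bin/create-node-certs.sh {{ new_node }}

    - name: Label the node with its region and zone
      command: oc label node {{ new_node }} region=primary zone=az1

    - name: Keep the node unschedulable until it has been reviewed
      command: oadm manage-node {{ new_node }} --schedulable=false

    - name: After review, allow workloads to be scheduled on the node
      command: oadm manage-node {{ new_node }} --schedulable=true
```

The unschedulable-until-reviewed step matters at their growth rate: a misconfigured node never receives production pods before someone signs off on it.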
All right, well, that was a lot, actually. Thank you all very much. There have been a number of questions in the chat room, and I'm wondering myself a little bit: you've done some custom work here, and I'm wondering how much of it you are looking to push back into OpenShift Origin, the open source project under the hood. Is it possible to do that, or is the custom work you're doing with Ansible and Puppet very particular to your configuration?

Well, we are thinking about that, but at the moment we don't have any concrete answer for that question.

Okay. Well, I'll keep pushing you on that, seeing as how I'm doing the community management; I'd love to see some of these lessons learned incorporated so everybody else can use them as well. So one of the questions in chat was: you're using Ansible and Puppet; could you do these customizations with just Ansible?

That's a great question. Yes, I think it is possible; that's my point of view. We are thinking about that, and probably for the next generation we'll use just one of them. I don't know which one, but I can say that Ansible is really easy to use and easy to learn.

There is another question that just came up: do you run separate OpenShift systems for SDLC environments, dev/test, integration, prod? No, we do not run separate OpenShift systems. Production and non-production applications are separated within the same OpenShift cluster; they do not run on the same nodes, but we have the same cluster. Yes, the idea is that we use the same cluster for all the different environments, but we have special nodes reserved only for production applications.

Going back in the chat a little bit: for SSL on the SDN, are you using wildcard certs or installing one per application? We are using both. We have a default wildcard certificate, but if an application wants to use its own certificate, that's also possible.

And there are a few questions about what you're using for your Docker registry: are you exposing the one that comes with OpenShift, or are you standing up something external like Artifactory or Nexus? We are looking at the CoreOS enterprise registry. At the moment we are using the registry that comes with OpenShift Enterprise 3, but soon we will deploy the CoreOS enterprise registry.

Very interesting how you've managed to use some of the best practices from different offerings; that's really a nice overview of everything. There's another question in here: are you using LDAP to authenticate users into OpenShift, or for users of applications hosted on OpenShift? We use LDAP for OpenShift users. What's used inside each application is the responsibility of the OpenShift users, which are the developers, and we have nothing to say about that.

Let's see, are there any other questions out there? I think we've answered most of the ones that have come in. There's a new question: how do you authorize users for OpenShift? Well, I don't understand the question exactly, but we are using LDAP; we use the authentication and authorization modules that OpenShift provides. For users, they need to belong to the proper LDAP group to be able to log in to OpenShift, so if any user from Produban wants to use OpenShift, they need to be added to that particular group. Yes, we have a very big solution for ALM, and we have an onboarding procedure for each new user: there is an onboarding solution that creates the user in LDAP and adds that user to the specific LDAP group for OpenShift.
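For reference, wiring LDAP authentication into OpenShift 3 is done in the master configuration. A minimal sketch of the relevant excerpt, with a hypothetical server URL, CA file, and provider name rather than Produban's actual settings:

```yaml
# Excerpt from /etc/origin/master/master-config.yaml; the URL, CA file,
# and provider name below are hypothetical placeholders.
oauthConfig:
  identityProviders:
    - name: corporate_ldap
      challenge: true
      login: true
      provider:
        apiVersion: v1
        kind: LDAPPasswordIdentityProvider
        attributes:
          id: [dn]
          preferredUsername: [uid]
        bindDN: ""
        bindPassword: ""
        insecure: false
        ca: ldap-ca.crt
        url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
```

A group requirement like the one described above (only members of a specific LDAP group may log in) can then be enforced with an LDAP filter component in that URL.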
So, I'll ask you guys: what are the new features you're looking forward to in the coming releases of OpenShift? Excellent question: performance, performance, performance, and performance. We would like to do some vertical scaling, so we are waiting for the next version, 3.2, which will use Kubernetes version 1.2, and the new limit will be, I think, 110 pods per node. So for us, the most important features are performance and vertical scaling. The issue we have is that we could deploy big VMs with many pods in each VM, but we cannot, not because of an infrastructure limit, but because of an OpenShift limitation: OpenShift does not support more than a certain number of pods per node, regardless of how big your VM is. The next version is going to fix that, and that's what we are looking for.

Right. Well, we're all looking forward to the release party for the next one, too. So, coming soon, and we're hoping that as it comes out the door, you'll be one of the first ones to do some testing, review, and feedback on that as well. And I know you guys have been involved prior to the release. So, thank you very much for all of your efforts working with us on the OpenShift project, and also for coming here today and talking with us about your production multi-region deployment. I learned a lot, and I really wish we could get you in front of the entire OpenStack community in Austin soon, so I'm going to see what I can do to figure out how to get you guys to Texas. Hopefully, we can figure out how to get more OpenStackers using OpenShift as well. This is about the third production conversation I've had with people about OpenShift on OpenStack, and it's really brilliant to see it working, as well as the Ceph stuff. So this is great stuff, and if people have questions for them, you can log into the mailing list on OpenShift Commons and ask. I will try to post this recording pretty quickly, and I'll ask Christian to send me the slides, so people who want to take notes can add those to the mailing list as well. Thanks again, everybody. Have a nice evening over there in Spain, and to everybody who joined us.

Thank you very much. Take care, guys.