Hello, my name is Christian Rodin, and I work for Produban. I am the PaaS lead engineer for the Global PaaS Service. So let me introduce Produban. Produban is a global company, and we belong to the Santander Group. Santander is one of the most important banks in Europe and also in South America. At Produban we are more than 5,000 professionals in nine countries: we are in Spain, Germany, the United States, Argentina, Chile, Mexico, Brazil, and others.

In this slide I would like to talk about the relationship between dev and ops. In traditional IT, and in a bank in general, the relationship is not good. In our case we are two companies, one for operations and another for development, and that is a big problem. Dev and ops generally work in silos. When there are problems in production, ops usually says, "the problem is in the Java code", and development says, "no, the problem is the database, the problem is the proxy, the problem is the application server, the problem is on the WebSphere side". These are the most common problems in a traditional IT company.

In general, dev sometimes doesn't know or understand the complexity of the ops environment. In a bank we have a lot of components, and they are all interconnected, so it is really complex to understand. The same goes for ops: when we receive a new application, for us it is a black box. Ops knows nothing about the runtime, how to tune the application, or how to configure it, and that is another problem. Lack of communication between dev and ops is another big problem, and dev and ops have different objectives. In our case, as I told you, we are two different companies, one for production and another for development.

So what were the reasons to adopt DevOps in our company? These are the most common ones.
Sharing the same tools between dev and ops; increasing collaboration between dev and ops; reducing deployment time; minimizing environment differences; improving time to market; platform tooling; automating deployment tasks; providing a global service; and reducing infrastructure and operation costs, which is one of the most important reasons. Continuous delivery is another important reason why we decided to adopt the DevOps approach, along with improving resolution time and having more time available to add value and to invest in continuous improvement.

In order to adopt this DevOps approach at Produban, we decided to create a service called the Global PaaS Service for DevOps. This service is based on OpenShift Enterprise version 3, so let's see the current status of this project. We have deployed the Global PaaS, based on OpenShift Enterprise, in different regions: we are in Mexico and in Spain, where we have two data centers, and also in the UK. In the next two weeks we will finish the Brazil region. All the OpenShift clusters are deployed on premise, on Red Hat OpenStack version Juno.

This is our Global PaaS architecture. We have a layer of load balancers and three availability zones on OpenStack: a set of HAProxy instances for the applications, another set of load balancers for the OpenShift console, and another load balancer for the S3 service. The next layer is the OpenShift masters. We have a cluster of masters in different regions, and on each OpenShift master we deploy the master service and also the etcd service. The next layer is the OpenShift infrastructure nodes, where we deploy the routers and also the Hawkular service. We are not using the OpenShift registry; we use an external Docker registry. The next layer is the OpenShift nodes, which is where we run the applications, the containers. We have two kinds of nodes, one for production applications and another for non-production applications. We also deploy a Ceph cluster.
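The production versus non-production split described above is typically implemented in OpenShift with node labels and node selectors. The talk doesn't show the actual labels used, so the label names and node inventory below are hypothetical; this is just a minimal Python sketch of how workloads get steered to the right kind of node:

```python
def nodes_for_environment(nodes, env):
    """Return the names of schedulable nodes labeled for the given environment."""
    return [n["name"] for n in nodes
            if n["labels"].get("environment") == env and n.get("schedulable", True)]

# Hypothetical inventory: two production nodes, one non-production node,
# and an infrastructure node that only runs routers and Hawkular.
cluster = [
    {"name": "node-prod-01", "labels": {"environment": "production"}},
    {"name": "node-prod-02", "labels": {"environment": "production"}},
    {"name": "node-dev-01",  "labels": {"environment": "non-production"}},
    {"name": "infra-01",     "labels": {"region": "infra"}, "schedulable": False},
]

print(nodes_for_environment(cluster, "production"))   # the two production nodes
```

In OpenShift itself the same idea is expressed declaratively: the node carries the label and the project or deployment carries a node selector, so production containers can never land on non-production hardware.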
Ceph is used for Docker persistent volumes. We also have a monitoring system for the infrastructure: we use Wily Introscope from CA. For cloud OpenShift cluster management we are evaluating CloudForms; at the moment we don't use CloudForms, but our idea is to use it in the near future. For log management we have an Elasticsearch cluster, and we send all the infrastructure logs to this central log repository. We use a jump host: from this host we deploy the Global PaaS infrastructure and also manage the OpenShift cluster. We deploy a monitoring solution for the applications, again Wily Introscope. We have an external Docker registry in each region, and we have a data lake for the application logs: all the container standard output and standard error are sent to these data lakes. Our idea is to use this data lake not only for the container logs but also for the metering logs, the HTTP access logs, et cetera. The OpenShift cluster uses external services for DNS and NTP. We use Satellite for package management and also for configuration management; all the installation is done with Ansible, with Puppet for configuration.

In order to improve DevOps productivity, we found that the OpenShift console is good, but it's not enough. So we decided to create some tools around OpenShift, and my idea is to introduce some of the tools that we have coded at Produban. This is the first tool, the status page. Using this tool we can get the cluster status: for instance, here we have the master service, and we can also get the routers, the S3 service, the Ceph cluster, et cetera. We can also see the status of every production node and every non-production node. This is the issues and changes portal. We publish all the issues and all the changes in this portal, and if a change or an issue is really critical, we send an email to all the customers. For instance, here is the OpenShift upgrade to version 3.2. This is the uptime service.
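The talk doesn't spell out how the uptime service turns monitoring data into the SLA figures, but the usual calculation is uptime as a percentage of the elapsed period. A small Python sketch under that assumption (the downtime figure is invented for illustration):

```python
def uptime_percent(period_minutes, downtime_minutes):
    """Percentage of the period during which the service was up."""
    return 100.0 * (period_minutes - downtime_minutes) / period_minutes

# A 30-day month with 43 minutes of recorded downtime (hypothetical figures).
minutes_in_month = 30 * 24 * 60          # 43,200 minutes
monthly = uptime_percent(minutes_in_month, 43)
print(f"{monthly:.3f}%")                 # roughly 99.900%
```

The same function works for the daily view by passing 24 * 60 as the period, which is presumably why both views can share one metrics pipeline.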
We have different views for the uptime. For instance, this is the monthly view: we have the uptime for every month, and if you click on this red circle, you can get the issues and changes for that month. This is a very interesting tool for the DevOps teams. This is another view, the daily view, where we can get the uptime for every day. For us, measuring and monitoring is really critical. We collect metrics for all the OpenShift components, in general the uptime and the response time, and this information is used to calculate the service level agreement for our customers.

We have another very interesting service called the notification service. This is a public service: every Santander employee can subscribe to it. If you want to get information about issues and changes in our platform, you can subscribe. We have different channels, for instance changes and issues, newsletters, and Docker corporate images. We also have a Global PaaS portal where we publish tutorials, tech notes, et cetera, and we provide all this information via REST as well, so anyone can get it through the REST API.

So what is the customer opinion about the Global PaaS Service? In general, the opinion is really excellent. The customers feel very comfortable with OpenShift; version 3.2 is very stable, and we have more or less 3,000 containers in production. So in general, the opinion is really, really excellent.

This is the DevOps global team behind the Global PaaS Service. We are a group of professionals from different countries: the UK, Brazil, and Spain. We are also working with two consultants from Red Hat. That's all, many thanks.
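As a closing illustration: the status page described earlier essentially rolls per-component checks (masters, routers, S3, Ceph, the nodes) up into a single cluster state. The component names and state values below are hypothetical, not Produban's actual check format; this is only a minimal sketch of that roll-up logic:

```python
def overall_status(components):
    """Collapse per-component states into a single cluster status."""
    states = set(components.values())
    if "down" in states:
        return "outage"       # any component fully down dominates
    if "degraded" in states:
        return "degraded"     # otherwise the worst partial failure wins
    return "operational"

checks = {"masters": "ok", "routers": "ok", "s3": "ok", "ceph": "ok"}
print(overall_status(checks))             # operational

checks["routers"] = "degraded"
print(overall_status(checks))             # degraded
```

A worst-state-wins roll-up like this keeps the status page honest: one degraded router is enough to change the headline state, which matches how the issues and changes portal escalates critical items to email.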