Hello, Stackers. My name is Sevald and I'm the Sberbank Solution Architect. Many enterprise companies are now considering using OpenStack to manage their legacy environments with pet applications, and in this session we would like to share our experience of making this a reality at Sberbank. For that purpose, my colleagues Ivan from Mirantis and Fjodor from ITK and I will give you a brief overview of the project objectives, how they influenced the architectural decisions, and how we deployed the solution.

First of all, I will briefly outline what Sberbank actually is and say a few words about the project background. Sberbank is the largest bank in Central and Eastern Europe and number 102 in the Forbes list of the 2000 biggest public companies. We have millions of customers in many branches across many countries, more than 300,000 employees, and more than 10,000 developers working in the bank today. That's why we have a complex IT infrastructure and many applications supporting our business. Almost all applications, whether provided by ISVs or developed in-house, have a complex multi-tier topology, and on this slide you can see an example of such an application. Different components of this application may run on different platforms and different system software. For example, the database server usually runs on a RISC server, while the application server usually runs Linux on the x86 platform. So the deployment of a test or development environment for such an application takes quite a long time, involves many people, and is error prone. But our business requires reduced time to market for new products, now more than ever.

That's why we set the following business goals for our project: to reduce the time to deploy such complex applications, to automate the whole process of application deployment, to enable a self-service portal for users with common IT services, to provide detailed billing and reporting for used capacity, and to increase efficiency by organizing a dynamic IT infrastructure in the bank. Our target service model for the cloud is based on a central service catalog, which consists of complex services; complex services are built from basic services, and an end user may order a complex or basic service from the service catalog with one click from the self-service portal or with a command from the command line. The service catalog is always up to date and populated with supported applications.

The whole project started from an examination of the existing cloud solutions on the market at the time. Six different cloud stacks were tested on a Sberbank site, and OpenStack was selected as the platform for our target cloud architecture. The next step was to deploy a few basic cloud regions with basic OpenStack functionality and collect the detailed requirements for our target cloud architecture. Then we started to design the architecture for our cloud solution and its deployment. All the described requirements led us to a complex multi-region architecture for our target cloud solution. This architecture is not supported out of the box by OpenStack distributions, which is why we started designing our target cloud architecture with our partners. Now I will ask Ivan to describe the architectural solutions in detail.

Thanks, Seva. I'm Ivan, the cloud solution architect at Mirantis responsible for this deployment architecture, and now I will tell you how we designed this solution.
On this slide you can see the overall high-level diagram of the entire solution. It's quite complex and consists of multiple areas and multiple components, so now we'll take a closer look at its main components and how it's actually built.

The very first decision, which defined the whole solution architecture, was the undercloud/overcloud approach. Our target environment is heterogeneous and consists of many different platforms. On the other hand, we faced some strict boundaries set by the project timeframe and available resources. So we decided to use a layered architecture here, with a stable and static undercloud which holds multiple isolated control planes of individual overclouds. It allowed us to quickly deploy the base layer and then work concurrently, in multiple streams, on multiple overclouds with different networking, storage and compute models, while isolating all the possible deployment and operational issues.

But in this case we still needed to provide cloud users with a common, transparent user experience across the several overclouds. So the second decision was to put all the common services, such as the Keystone authentication backend, the Horizon portal and the common repositories, into a separate shared-services undercloud tenant. To support the layered Sberbank organizational structure and comply with the project requirements, we are using the domains feature of Keystone v3 here, with a custom role model and several Microsoft Active Directory authentication plugins. Highly available repositories of Glance images and Murano packages are also located in this shared undercloud tenant.

Now let's take a look at the architecture of the specific OpenStack regions. The most widely adopted platform for OpenStack is obviously KVM. We also realized that the Hyper-V integration architecture for OpenStack looks quite similar, so we designed it in much the same way. In the Sberbank network, each rack is isolated into a separate layer 3 segment, and this boundary influenced the networking design for all the regions here. Both for KVM and Hyper-V, networking is based on Open vSwitch and overlay VXLAN networks, which are announced externally using the OSPF protocol. As Cinder storage backends for KVM we use Ceph and EMC VNX, depending on the business requirements, so a subset of KVM compute nodes is also equipped with Fibre Channel host bus adapters. Ceph is also used for object storage here and has a multi-tier configuration with a dedicated SSD pool for high-performance workloads; Fyodor will tell you about this in more detail later in this session. Hyper-V uses Microsoft's software-defined storage solution introduced in Windows Server 2016 and is set up quite similarly. Sberbank also needed to provide end users with file share as a service; in the KVM region we are using the generic Manila driver for that purpose.

The next region we want to highlight is VMware. Its architecture is obviously quite different. The main difference is that we integrate with VMware vCenter instead of individual compute hosts. It gives us some extra benefits, such as the ability to use advanced vSphere features like DRS and VMware HA, but it provides less visibility into individual hypervisor hosts from the OpenStack perspective.
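To show how the shared Keystone and the multiple regions look from a cloud user's point of view, here is a minimal Python sketch using keystoneauth1 and python-novaclient. This is a generic pattern rather than the project's actual tooling, and the auth URL, domain, project and region names are hypothetical placeholders.

# A single shared Keystone (v3, domain-scoped) serves every region;
# the client just picks a region from the common service catalog.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

# Hypothetical endpoint and credentials; in Keystone v3 the user and project
# domains can map to different parts of the organizational structure.
auth = v3.Password(
    auth_url="https://keystone.cloud.example.com:5000/v3",
    username="developer1",
    password="secret",
    user_domain_name="corp-ad",        # e.g. backed by an Active Directory plugin
    project_name="test-env-project",
    project_domain_name="corp-ad",
)
sess = session.Session(auth=auth)

# One token, many regions: the same session can talk to the KVM, Hyper-V,
# VMware or bare-metal regions simply by selecting region_name.
nova_kvm = nova_client.Client("2", session=sess, region_name="kvm-region")
nova_hyperv = nova_client.Client("2", session=sess, region_name="hyperv-region")

for server in nova_kvm.servers.list():
    print(server.name)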
As overlay networks in a VMware environment are not supported by OpenStack yet without a third-party SDN controller such as VMware NSX, in this solution we are using VLAN networking with the VMware DVS plugin instead. The DHCP agents are also placed on individual ESXi hosts here, because the OpenStack controllers for this region are located in the KVM-based undercloud and the Mitaka release didn't support VLAN trunking for them. It seems to be fixed in Newton, but we need to investigate the case further; maybe we'll adopt this feature in a later release.

As Sberbank also needs to provide its customers with bare-metal hosts, Ironic was also designed and deployed here. The Ironic architecture is pretty much the generic PXE boot mechanism, but we faced some restrictions in its current implementation. As the Ironic controllers and compute nodes are located in isolated layer 3 segments, we set up DHCP relay on the networking equipment to allow PXE boot across multiple racks. Next, Ironic in Mitaka didn't support tenant network isolation, which is against Sberbank security policies, so we had to design separate Ironic environments, separate regions, for these isolated security zones. This issue is announced to be fixed in Newton. Still, two limitations remain: neither port bonding nor Cinder volumes are supported by the community so far, so we are still unable to provide the customer with redundant Ironic networking or system volumes for the bare-metal hosts.

Now let's take a deeper look into the RISC regions, which are obviously the most tricky ones. We'll start with the IBM Power integration. Until recently, there weren't any acceptable means to manage IBM Power hosts from OpenStack in the same way as x86 ones. The only way to do that was using the IBM PowerVC distribution, which is quite limited: it doesn't include Heat or Murano, which we use extensively in this deployment. It was an issue until the situation changed a year ago, when IBM released a tool to manage IBM Power virtualization called NovaLink. It supports Power hosts starting from the POWER8 series and allows us to manage these hosts in a similar way to x86 ones from nearly every OpenStack distribution. Essentially, it's an Ubuntu-based logical partition on your Power server acting as a compute node. It contains a Python framework to manage the IBM Power virtualization technology, plus Nova, Neutron and Ceilometer drivers. It still has some limitations: not all Cinder backends are supported so far, and overlay networks are not supported, but it's a huge step forward, and we utilize this technology in this deployment.

And finally, let's take a look at the Oracle SPARC architecture, which is the newest one and still a work in progress. The most recent officially released Oracle OpenStack distribution for SPARC is still Kilo-based and uses the proprietary Oracle EVS networking stack, which restricts its integration capability for our project. It would have forced us to change the architecture significantly, so we started to discuss with the Oracle team how we could overcome that. We finally managed to deal with this issue by enrolling in the Solaris 12 beta program. It allowed us to test the next version of OpenStack for Solaris on SPARC, which is still to be released next year. It's Mitaka and Open vSwitch based, so now we can integrate these Oracle hosts in the same way as we do with x86. So the entire infrastructure architecture looks quite similar for all the infrastructure backends.
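To give a rough idea of what the "generic PXE boot mechanism" means in practice for the bare-metal region: each physical server is enrolled in Ironic with a PXE-plus-IPMI driver and its management credentials, and Nova then schedules instances onto those nodes. The sketch below is a generic illustration using a recent openstacksdk, not the project's actual enrollment tooling; the cloud name, addresses, credentials and hardware values are hypothetical.

# Generic Ironic enrollment sketch: PXE deploy with IPMI power management.
# All names, addresses and credentials are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="ironic-zone-a")  # region/credentials defined in clouds.yaml

node = conn.baremetal.create_node(
    name="rack12-blade03",
    driver="ipmi",  # classic Mitaka-era deployments used "pxe_ipmitool" instead
    driver_info={
        "ipmi_address": "10.0.12.3",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
    properties={"cpus": 32, "memory_mb": 262144, "local_gb": 900},
)

# Move the node through the Ironic state machine so Nova can schedule on it.
conn.baremetal.set_node_provision_state(node, "manage")
conn.baremetal.set_node_provision_state(node, "provide")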
These capabilities are still being tested by our engineering teams together with Oracle support. So far we have covered the infrastructure architecture, but Sberbank's ultimate goal is to provide its customers, its developers and testers, not with individual hosts but with application stacks like the one Seva showed before. The Murano engine, which resides in every single region here, is the key to opening up these capabilities for this project. Murano is essentially the engine that allows us not only to deploy, like Heat does for instance, but to manage the whole lifecycle of these complex multi-tier applications, providing the end user with an application catalog and a convenient way to order, set up, reconfigure and clean up entire business applications. In order to support that, Murano is based on an object-oriented class model, allowing an administrator to declare the dependencies between entities and model complex multi-tier applications. On this slide you can see the class hierarchy implemented so far, which includes a common infrastructure library, basic operating system classes for Windows and for Red Hat, classes for Sberbank applications like Oracle Database and IBM WebSphere, and application classes like WebSphere Cell. Some of these application tiers are cross-platform; the most common use case is to deploy databases on RISC-based servers. This substructure should also be reflected in the UI, allowing the administrators of specific platforms to track the resources used by these application deployments. To support this case, we enhanced Murano with advanced multi-region capabilities using the Murano plugin model. It allows end users to order a multi-tier application from the parent environment, and all the needed child environments are automatically deployed in the specific regions where the actual resources are located and can be tracked by cloud administrators. Now let me finish the solution architecture overview and hand over to Fyodor, our deployment team lead, who will tell you a bit more about how the solution was actually deployed.

Okay, thank you for the clicker. Hi, I'm Fyodor and I'm the lead of the deployment team for this project. I'm going to give you a brief overview of how we did this deployment from its beginning to production. To simplify, unify and speed up the deployment, it was decided to use Fuel as the deployment automation tool. You can hear a lot about Fuel next door, but I just want to say that it was really useful for our project and to highlight a few things about it. Fuel is a deployment tool that can provision hardware servers, install the operating system, and configure OpenStack. Fuel 9 has new post-deployment capabilities, so now we can install and upgrade additional features on existing OpenStack environments using hot-pluggable plugins; those plugins we either get from the community or create ourselves, as we did for this project. For this project we had eight different regions of OpenStack, one for the undercloud and seven for the overclouds, and it's really hard to install them manually, as you know if you have ever done that. In fact, we could have used one Fuel installation to deploy all these regions, but after all the architecture sessions we decided not to do that. Now each installation has a different feature set, plugins and their versions; it can even have different versions of Fuel and different versions of OpenStack, and of course we can deploy and redeploy them at any time. For this project, we have a nice solution that helps us in rebuilding the set of repositories, images, and packages.
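To give a flavour of what this object-oriented class model looks like, here is a small, purely illustrative Python analogue. The real model is written in MuranoPL, not Python, and the class, tier and region names below are hypothetical; the point is just that an application declares its tiers, their dependencies, and the region each tier lives in, which is what the multi-region plugin automates.

# Python analogue of the Murano class model: an application declares its
# tiers, their dependencies and target regions. Purely illustrative.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Tier:
    name: str
    os_class: str                  # e.g. "RedHatInstance", "WindowsInstance", "AIXPartition"
    region: str                    # the child environment is created in this region
    depends_on: Optional["Tier"] = None


@dataclass
class Application:
    name: str
    tiers: List[Tier] = field(default_factory=list)

    def deployment_plan(self) -> List[str]:
        # Deploy dependencies first, each tier into its own region.
        ordered, seen = [], set()

        def visit(tier: Tier) -> None:
            if tier.name in seen:
                return
            if tier.depends_on:
                visit(tier.depends_on)
            seen.add(tier.name)
            ordered.append(f"deploy {tier.name} ({tier.os_class}) in region '{tier.region}'")

        for tier in self.tiers:
            visit(tier)
        return ordered


# Typical cross-platform case from the talk: database on a RISC region,
# application tier on x86/KVM, ordered as one application from the parent environment.
db = Tier("oracle-db", "AIXPartition", region="power-region")
cell = Tier("websphere-cell", "RedHatInstance", region="kvm-region", depends_on=db)
app = Application("corebanking-test", tiers=[cell])

for step in app.deployment_plan():
    print(step)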
So when we upgrade or install new regions from scratch, we get all the required packages and images from a place that is close to the installation, not from the Internet. For this project we do not touch TripleO or containers, so every region begins with server provisioning, which we do via PXE boot. Normally, for a successful PXE deployment you should use a dedicated PXE interface, with no bonds and no link aggregation at all. And if you have just two 10-gigabit interfaces per server, it's really wasteful to use one of them that way. So we tried, and succeeded, using a network feature known as LACP fallback or standalone mode; it really depends on what the switch vendor calls it. We tried and verified this technology on Cisco, Huawei and Arista switches, and now everyone is happy: we do not need to search the warehouse for additional network cards, the network team does not need to buy an additional gigabit switch or SFP modules, and we can utilize the full performance of the bonded interface, balancing traffic between its links. Also for this project we used additional storage provided via Fibre Channel LUNs, of course with multipathing.

After successful hardware provisioning, we install OpenStack with preconfigured roles. There are roles such as controller and compute in a default Fuel installation, but for this project we needed to customize them, and that's possible not only in DevStack but also in production. It was required for us to detach some roles. We needed to detach Keystone, because we need a dedicated Keystone service as a centralized point of authentication, where each region can access the centralized Keystone and get a token. We have a very customized Horizon installation, so it can be detached and upgraded at any time, leaving the other regions intact. Using plugins, we can minimize the additional configuration required to set up an environment. And we can do not just pre-deployment customization, such as attaching to an external Keystone; we can also do hot-pluggable post-deployment customization and install custom OpenStack packages on an already working installation.

For this project, we have a really nice solution: we modified the L3 agent for dynamic routing using Quagga, a software router for Linux. Now all tenant networks are dynamically routable and we don't need to use floating IP addresses for them. Because this plugin has customizable templates for Quagga, we can configure any routing protocol that your hardware supports. For this project we use OSPF, but we could use BGP or any other routing protocol that Quagga supports.

Not only the network roles are affected by plugins. For this project, we created a plugin that can create different Ceph pools and different Cinder backends out of the box. So now we have multiple Ceph tiers, like high-speed storage on SSD disks and common-speed storage with generic hard disk drives and an SSD cache, and users and administrators get them out of the box. Also, we upgraded the EMC VNX storage plugin to support 4.0.9 and added the capability to use external Fibre Channel storage. We created runbooks for all deployments, from KVM to Hyper-V, so each deployment can be easily managed, repeated or recreated in the next data center. Plugins really help a lot with that; they do a lot of the customization that we previously had to do manually.
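As an illustration of what those storage tiers look like from the user side, here is a minimal python-cinderclient sketch that creates volumes of two different volume types, one per tier. The volume type names ("ceph-ssd", "ceph-hdd"), endpoint and credentials are hypothetical placeholders for whatever the actual backends are called, and it reuses the kind of Keystone session shown earlier.

# Consuming the multi-tier Ceph storage as an end user: each Cinder backend is
# exposed as a volume type, so picking a tier is just picking a type.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client as cinder_client

auth = v3.Password(
    auth_url="https://keystone.cloud.example.com:5000/v3",
    username="developer1", password="secret",
    user_domain_name="corp-ad",
    project_name="test-env-project", project_domain_name="corp-ad",
)
cinder = cinder_client.Client("2", session=session.Session(auth=auth),
                              region_name="kvm-region")

# High-performance tier backed by the dedicated SSD pool.
fast = cinder.volumes.create(size=100, name="oracle-redo-logs",
                             volume_type="ceph-ssd")

# Capacity tier backed by HDDs with an SSD cache.
slow = cinder.volumes.create(size=500, name="test-data-dump",
                             volume_type="ceph-hdd")

print(fast.id, slow.id)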
Automation helps us to reduce misconfiguration and speed up the deployment. On this slide, you can see the current state of all regions and all the deployments that we have made. We have already finished the undercloud, shared services and KVM regions, and successfully tested the functionality of basic monitoring and basic Murano applications. We have deployed and are now testing regions such as Ironic, VMware and IBM Power. And we are still in progress, with plans to finish by the end of this year, on regions like Oracle and Hyper-V, as well as extended functionality like complex Murano applications, advanced monitoring and UI enhancements. Now I'm passing the clicker to Seva and he will tell you a bit more about the organizational aspects of this project.

Thank you, Fyodor. On this slide, you can see the whole project timeline. Because of the limited time, we run multiple streams in parallel. On a daily basis, we organize scrum meetings within our teams and between team leaders, and on a weekly basis we organize retrospective sessions. The main results we expect by the end of this year, when our target cloud solution must go to production and host most of the development and testing environments for our developers. To make this doable, we organized a few more working groups for this project. There is the PMO and the Global Architecture Group, which are responsible for the whole project strategy and governance; of course, all architectural solutions are designed and approved by these groups. Development is split into four groups: the UI Development Group, the Murano Application Development Group, the OpenStack Cloud Region Deployment Group, and the Documentation Development Group. On the Sberbank side, we also organized two more groups. The first is the Development Group, a new business unit which is responsible for third-line support and for integrating our cloud solution with the existing bank infrastructure. The second is the Operations Group, which consists of highly skilled engineers from different technical departments and is responsible for first and second-line support and the whole cloud maintenance. Our development team has the additional responsibility to communicate with the community and contribute the most valuable results back. As our dev team lead, Dmitry Plakhov, who is also here in this room, promised me, we will contribute the most valuable results to the community; we already have an example of a bug fix committed upstream, and we will of course share code reviews and blueprints for the functionality that is most valuable to us. Other development results will stay in-house, because they are mostly related to UI changes and the integration part. And with that we are finished with our slides, and now we want to show you a quick demo of our target cloud solution and how it looks to the end user. Just a few seconds.

Sberbank Cloud, powered by Mirantis OpenStack, allows you to easily manage heterogeneous infrastructure. Now your teams can provision virtual resources on different platforms in a few seconds. Sberbank Cloud provides a user experience similar to the well-known Amazon Web Services. You get full control over resources, service topology, and key metrics like utilization and service health. You also get a catalog of basic and multi-layer applications available on demand and ready to be deployed in a few minutes. That allows your agile teams to speed up decision-making, product development and delivery. Sberbank Cloud is the key to enabling a new service lifecycle model.
So we are finished; thank you for your attention, and we are now ready for your questions. Please.

Hi. Thank you for the opportunity. I always hear people asking me how the future of cloud relates to the good old legacy mainframe stuff. Coming from a bank, do you have this legacy infrastructure in your cloud? And if you do, how do you integrate it, and do you have plans to integrate or move things from the old mainframe platform to the cloud?

Thank you for your question. Yes, of course, we have a vision for our future architecture, and many applications are now being developed on Linux x86 platforms, but we still have all the legacy applications, and as I said, we need to support and develop them. So for now it is much more efficient for us to automate the deployment of the existing applications than to move all of that to x86.

Hello. Thanks for the presentation. Regarding those, let's say, legacy platforms like Solaris or the existing VMware: did you build, or do you plan to build, new infrastructure for VMware on which you will deploy applications in the future, or will you be able to migrate the existing VMware infrastructure under the cloud management?

Okay, so now we have three hypervisors, VMware, Hyper-V and KVM, in our environment, and for now we use all three of them. Of course, in the future we may use mostly KVM, or maybe no VMware at all, but our target cloud architecture must consist of all three regions. If there is a business case for using, for example, mostly KVM, we will migrate our workloads to KVM. It's only a question of efficiency, but the functionality of our cloud solution must cover all three regions.

Thank you for your presentation, and one question about the plugin that you made with OSPF and Quagga for virtual routers. Is it packaged like a Neutron plugin? Is it already open source, or is it still in-house development? And if it is in-house, will it be open sourced in the near future?

I can answer that it's in-house development; it's packaged so that we simply replace the L3 agent with the new one we created. As for an open source release, I think that's a question for Sberbank: do we have plans for releasing it? For now we actually have no such plans, but if its functionality is valuable for the community, we will of course consider sharing it with the community. Great, thank you.

I have a question, guys: do you already process some production data in apps running on OpenStack, or do you just plan to? Actually, we use our cloud now only for development and test environments, and we have a few pilot cloud regions, as I mentioned at the beginning of this presentation, which already support more than 100 projects and provide test and development environments for those projects. So there are some real customers. And a second question about containerization: do you consider looking into it? We are now testing Docker in our cloud, and our development team has already run applications on Docker. Regarding using Docker as the main virtualization technology, we have no answer to that question now, but maybe in the future we will use Docker more.

Okay, thank you. Thank you very much for your attention.