Okay, I know many people are out eating lunch and the line is very long, but time is up, so let's begin. Hello everyone. This is a session about how we brought OpenStack to the financial industry, and in it we will present our use case, which is, as far as I know, the first to put OpenStack into production for mission-critical applications in China's financial industry.

First, let's introduce ourselves. I come from UnitedStack, which is the leading OpenStack service provider in China. UnitedStack was founded in 2013, and now we have hundreds of employees and hundreds of clients across many industries, including Internet, finance, manufacturing, and IDC. You can find some of our clients on our website, ustack.com.

Okay, let's talk about Hongfeng. Hongfeng Bank is one of the 19 national joint-stock commercial banks, founded in 1987, and it now has almost 10,000 employees. In a recent research report published by the Chinese University of Hong Kong,
Hongfeng was ranked fifth in Asia, which I think is a pretty good result. Hongfeng now has 14 tier-one branches and 279 affiliate agencies. As of the end of 2015, Hongfeng Bank had total assets of 1 trillion yuan, an increase of 22.6 percent, and an annual net profit of 8.1 billion yuan. So that's a brief introduction to Hongfeng Bank. As you can see, Hongfeng Bank is not as large as ICBC, Wells Fargo, or JPMorgan, but it is growing very quickly and its capital scale is pretty large; every minute of system downtime may cause huge losses.

Before we talk about OpenStack, let's take a glance at how deployment was done in Hongfeng Bank before. It was a long journey, because this is a banking service: every step needs to be checked, and checked again. Before an application is ready for the public, we need to prepare the DNS name, the firewall policy, the load balancer rules, and the monitoring rules; set up the database; decide which region to deploy in; and set up the VMs, including the operating system, middleware, runtime, and so on. Besides that, we need to configure VLANs, NAS, and other network and storage things. Since some resources are limited, like network VLANs or storage, the resources you need may not all be ready at once, so the lead time can stretch much longer. And since it involves many teams, the responses you need may not come quickly.

To resolve the problems described above, we can summarize the design goals of Hongfeng Bank's private cloud. First is high availability. We will talk about the RTO and RPO they expect in the following slides; it should be reasonable. There are two philosophies we need to follow: one is no SPOF, and the other is geographic dispersion. Whether a system goes down in one DC or the network between two DCs is interrupted, we don't want the core services' applications to go down. The second is effectiveness.
We want an effective system. We don't want a human-driven system, but an API-driven and system-driven one. API first is the principle, and most operations should be automated, so that we can avoid service stops caused by human error. Third is open architecture. It is a heterogeneous platform: not only VMs but also bare metal, Power VMs, and so on, and the vendors are many. We must bring all of them into one cloud.

This is the overall architecture of Hongfeng Bank. We have three centers, named the major center, the local center, and the remote center. The major and local centers are located in one city, and the remote center is located in another city. We have divided each center into regions named BU-X or DMZ-X. One region is for dev and test; it provides dev and test environments for developers in Hongfeng Bank, and we can run POCs there before we change configuration or roll out new services.

Here's a table that shows Hongfeng Bank's requirements for continuity. There are four ranks, five-plus, five, four, and two, and each one has a different RTO; as you can see, five-plus and five need near-zero RPO. We use many common technologies, like data replication, database backup, and so on. As you can see, continuity is a big problem that we cannot solve with OpenStack HA alone; we also need help from the database, the storage system, and the application design.

The OpenStack control-plane HA solution is pretty standard; you can get advice from the official OpenStack HA guide or other websites. cinder-volume is pretty annoying in that it cannot run active-active. All services are managed by HAProxy and Pacemaker, the MariaDB cluster is managed by Galera, and RabbitMQ uses durable queues. That part is pretty common, but compute node HA is quite different. We know Pacemaker is good, and Pacemaker is recommended in many documents, but it does not fully meet our requirements.
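To make the active/passive point concrete, here is a minimal Python sketch (not our actual Pacemaker configuration; all names are illustrative) of the pattern Pacemaker enforces for cinder-volume: exactly one node runs the service, and a standby takes over on failure.

```python
class VolumeServiceCluster:
    """Toy model of an active/passive cinder-volume deployment."""

    def __init__(self, nodes):
        self.nodes = list(nodes)              # failover preference order
        self.healthy = {n: True for n in nodes}

    def active_node(self):
        # Exactly one node runs cinder-volume: the first healthy one.
        for n in self.nodes:
            if self.healthy[n]:
                return n
        return None                           # total outage

    def fail(self, node):
        self.healthy[node] = False

    def recover(self, node):
        self.healthy[node] = True


cluster = VolumeServiceCluster(["ctl1", "ctl2", "ctl3"])
print(cluster.active_node())   # ctl1
cluster.fail("ctl1")
print(cluster.active_node())   # ctl2 takes over
```

Contrast this with active-active services behind HAProxy, where every healthy node serves traffic at once; cinder-volume's design forces this single-active pattern.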
So we have implemented another tool, named Rock; details will come in the following slides.

Okay, that was OpenStack itself; now let's talk about app deployment, that is, how an app is deployed on the Hongfeng Bank internal cloud. There are many components, like the DNS name, the load balancer, the web server, the database, and storage, and a bank request will pass through all of these components; everything above this line is always on. This deployment sample is for the five and five-plus applications I mentioned above; if the application is rank four or two, it uses another deployment that is not so complex. Storage is backed up both locally and to the remote center, and we have two types of backup. The orange boxes mean the component is not an active-active application, the red lines mean synchronous replication, and the green lines mean asynchronous backup.

Okay, here I want to talk about some characteristic features in Hongfeng Bank, including compute node redundancy, SDN integration, the VXLAN network, and failover automation. First is HA, specifically compute node HA. In the HA guide of OpenStack we can find this picture; it is pretty much the same as what I showed above. But I want to point out that while Pacemaker is good, it is not very clear and simple to use. Our operators want a more configurable, more flexible HA software. So we present another compute node HA solution, and it is included in UOS 3; UOS 3 is the OpenStack distro powered by UnitedStack. The name of our compute HA management software is Rock.
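Conceptually, a policy-driven compute HA engine of the kind Rock implements can be sketched in a few lines of Python. This is a hypothetical illustration, not Rock's real API; the metric names and workflow names are made up.

```python
# Illustrative policies: if a metric has the "bad" value, trigger a workflow.
POLICIES = [
    {"metric": "nova_compute_alive", "bad_value": False, "workflow": "evacuate"},
    {"metric": "network_reachable",  "bad_value": False, "workflow": "fence"},
    {"metric": "osd_process_up",     "bad_value": False, "workflow": "alarm"},
]


def evaluate(host, metrics, policies=POLICIES):
    """Return the workflows to trigger for one host's metric snapshot."""
    triggered = []
    for p in policies:
        if metrics.get(p["metric"]) == p["bad_value"]:
            triggered.append((host, p["workflow"]))
    return triggered


# A compute node whose nova-compute died but whose network is fine:
print(evaluate("compute-01", {"nova_compute_alive": False,
                              "network_reachable": True,
                              "osd_process_up": True}))
# [('compute-01', 'evacuate')]
```

The point of the design is that operators tune behavior by editing the policy list (in Rock's case, policy and workflow files), not by patching monitoring code.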
It isn't rock as in rock music; it's rock as in stone. We took this name because we want OpenStack to be as stable as a rock, never moving or changing. As you can see, there are two components in Rock's architecture, since we do not want to make it complex. rock-monitor watches the metrics we are interested in, like the nova-compute service, network connectivity, and OSD process status. The data that each rock-monitor gathers is stored in a database, and rock-engine gets the data from the database. The policy engine reads policies from files and matches them against the data; if a policy matches the data, a workflow is triggered, which runs something like fault isolation, alarming, or instance evacuation, and the results are reported.

The structure is quite similar to OpenStack HA with Pacemaker, but it is clearer and easier to configure: the operator can just change the policy file or the workflow file to make the HA behave as he wants. It's easy to deploy and understand; that's why we wanted another tool. This software is now open source on our GitHub, github.com/unitedstack. We will try to bring it into the OpenStack big tent, to make it another choice for OpenStack HA.

The second thing I want to talk about is SDN. We use the Cisco VTS SDN solution. We know that Neutron's performance and reliability are often blamed, and we have high demands on the whole system, especially the reliability of the network. Our whole system is distributed and connected by the network, and some parts do not have good partition tolerance. Once the network goes down, to the public it means the service stops, and internally it is a mess, so the network is a very important point for us. OpenStack has network solutions like DVR and L2 population, but they have some problems: L2 population influences the network through the OVS flows it installs, and some normal network operations won't work with DVR. For example, if you run keepalived on OpenStack, the VIP won't get switched, because
Neutron needs to know the port binding relationship, but the VIP switch is done by keepalived, and that cannot be known by Neutron.

Why did we not choose ACI, the other SDN solution powered by Cisco? Because ACI is too complex and has too many new concepts; we don't want to change everything in one night. What we want is just a clear, good-enough software that can configure our devices and report all device status without a human, so VTS is enough. VTS provides an API to OpenStack, and we use the VTS plugin to integrate with OpenStack, so that every OpenStack network API call is translated into a VTS API call and sent to the devices that need to be configured.

To show how VTS works, I have another picture, which shows the main workflow. Let's say, first, we create a tenant network. Creating a tenant network doesn't change anything on the devices, so VTS just allocates a VNI in its database and responds. Second, we attach a VM to the network. This does change things: the hosting information is captured by VTS, and it maps out the right TOR and TOR port using its project database. The spine gets this information; VTS provides the VTEP and VLAN, and the spine configures that VTEP and VLAN and pushes it to the TOR. Once the TOR is configured, the Neutron Open vSwitch agent, which has been modified by VTS, gets this message and fills in the requested VLAN information before programming the vSwitch. Once the physical switch and the virtual switch are programmed, the request is responded to.

Next, we create a router. Routing and switching in VXLAN will be explained in the following slides; in this scenario, creating a router just provisions an L3 VXLAN VPN and an anycast gateway. All of these things are just the commands shown on the right, so our network engineers can check whatever VTS does and whether it does it right or wrong. So VTS is
clearer than ACI; that's why we chose VTS.

Next, let me explain why we chose a VXLAN network and what MP-BGP EVPN VXLAN is. Here is a picture of how OpenStack Neutron originally runs. There are three OVS agents on compute nodes, and this is the message queue; Neutron puts messages on the queue for the agents, and the agents configure Open vSwitch. If L2 population is enabled, the forwarding information is pushed to the OVS agents and configured into Open vSwitch via the FDB. That is how L2 population runs; there is no magic here. Neutron knows every network, so it tells each OVS agent through the message queue, and the OVS agents set up the FDB entries, the tunnels, and the ARP responder. But Neutron cannot actually always know everything, as I mentioned above with keepalived, so we don't want to rely on this in our production environment. Also, plain VXLAN has the problem that traffic may flood, like BUM traffic.

So, this is how MP-BGP EVPN works. I'll explain it in a DCI scenario; we use VXLAN inside the DC as well, but this picture explains it more clearly. First, two VMs in DC1 are brought up, and they send their gratuitous ARPs or similar, so the route reflector learns that there are two VMs with this VNI, IP, and MAC. The same happens in DC2, where two more VMs are brought up. Next, the route reflectors exchange their information: DC1 gets the information about H3 and H4, and DC2's route reflector gets the information about H1 and H2. Then each route reflector sends the information to the correct TORs, the ones with the same VNI. So when H1 sends a packet, first it sends an ARP, and the ARP is answered by the TOR switch. Then it sends a packet toward H4, and since leaf L11 knows how the VNI is programmed and where H4 is,
it just sends the packet toward DC2, with no need for flooding or anything else. The edge device in DC2 forwards the packet to leaf L24, so H4 gets the correct packet, and there is no flooding and no wasted traffic in this exchange.

We can summarize the VXLAN solutions in this table. Flood-and-learn does not need a control plane, so it's easy and clear, with no special device requirements, but performance is not good: BUM traffic, that is broadcast, unknown unicast, and multicast, floods to all the VTEPs, and since we have many, many nodes, this is not an option for us. There is another software solution, named BaGPipe, which installs a BaGPipe agent on every compute node; the agents get BGP information through a BGP route reflector or a full mesh, but there is no widespread production experience with it yet.

As for the MP-BGP solution, generally we have two topologies to choose from, iBGP and eBGP. In the first, all the VTEPs are in the same BGP AS, the spine is the BGP route reflector, and spine and leaf build neighbor relations between themselves. With eBGP, each leaf is in a different BGP AS; there is no need for a route reflector, but it is much more complex to configure. In the end we chose the iBGP topology for our VXLAN, and we use OTV as our DCI solution. You may ask why we did not use VXLAN as the DCI network solution as well: we think OTV is easier to configure, and OTV has some features that VXLAN does not have yet, like loop detection and BUM suppression.

Everything I said above sounds very happy, but the fact is that we have done lots of work to make all these things happen.
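The control-plane behavior described above can be modeled with a toy Python sketch (illustrative only, not a real BGP implementation): hosts are advertised as (VNI, MAC) routes to a route reflector in each DC, the reflectors exchange routes, and the sending side can then resolve the remote VTEP without flooding. The host and leaf names follow the H1/H4, L11/L24 labels from the slide.

```python
class RouteReflector:
    """Toy EVPN control plane: (VNI, MAC) -> VTEP mappings."""

    def __init__(self):
        self.routes = {}

    def advertise(self, vni, mac, vtep):
        # A leaf advertises a locally learned host route.
        self.routes[(vni, mac)] = vtep

    def exchange(self, other):
        # The two DCs' reflectors merge each other's routes.
        merged = {**self.routes, **other.routes}
        self.routes, other.routes = dict(merged), dict(merged)

    def lookup(self, vni, mac):
        # None would mean the VTEP has to fall back to flooding.
        return self.routes.get((vni, mac))


rr1, rr2 = RouteReflector(), RouteReflector()
rr1.advertise(100, "aa:aa", vtep="leaf-1-1")   # H1 in DC1
rr2.advertise(100, "dd:dd", vtep="leaf-2-4")   # H4 in DC2
rr1.exchange(rr2)

# H1 -> H4: DC1's side already knows the remote VTEP, so no BUM flooding.
print(rr1.lookup(100, "dd:dd"))   # leaf-2-4
```

This is exactly the difference from flood-and-learn: the answer to "where is this MAC?" comes from the control plane, not from flooding the data plane.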
Besides that, there are still some troubles that I think everyone who wants to bring OpenStack to the financial industry will meet. Firstly, for availability, we deployed a number of OpenStacks: not regions, but truly separate OpenStack deployments, because we don't want them to share a Keystone or a database. So this is the situation: we have deployed many OpenStacks, some configurations are the same and some are not, and it is pretty difficult to operate in such a scenario, since there is no existing solution for operating many OpenStacks in one DC.

Second, the financial industry has security standards, which means we need to pass certain tests, possibly regularly. For one, passwords stored in plain text are not tolerated. As I said, there is a blueprint in Keystone about how to store passwords in the DB without plain text, but there has been no progress. Besides that, we have met some Ceph bugs and OpenStack bugs that can cause VM pauses or give VMs duplicate IPs; these are all things we really met in the production environment.

Last but not least, some features are missing, like consistency groups and whole-VM snapshots. When you take a snapshot in OpenStack today, you get only the root disk snapshot. Once a VM has two or three volumes attached, you need to back up the volumes separately, and if you want to bring the VM back, you need to restore them separately.
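A sketch of that manual workaround, with stub functions standing in for the real Nova image snapshot and Cinder volume snapshot calls (this is not the actual client API, just the shape of the loop you end up writing):

```python
def snapshot_vm(vm, take_image, take_volume_snapshot):
    """Return all artifacts needed to restore the VM: the root-disk image
    plus one snapshot per attached volume, taken one by one."""
    artifacts = [take_image(vm["id"])]
    for vol_id in vm["attached_volumes"]:
        artifacts.append(take_volume_snapshot(vol_id))
    return artifacts


# Stub backends standing in for the real OpenStack clients:
vm = {"id": "vm-1", "attached_volumes": ["vol-a", "vol-b"]}
arts = snapshot_vm(
    vm,
    take_image=lambda vm_id: f"image-of-{vm_id}",
    take_volume_snapshot=lambda vol_id: f"snap-of-{vol_id}",
)
print(arts)   # ['image-of-vm-1', 'snap-of-vol-a', 'snap-of-vol-b']
```

Note what the loop cannot give you: the snapshots are taken sequentially, so they are not crash-consistent with each other. That is exactly the gap a consistency-group feature would close.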
That is pretty inefficient. And HA is not enough for us yet: though we have implemented Rock, there are failures we have not covered, like a leaf network going down or hypervisor problems.

So, next steps. The first thing we want is an upgrade. Hongfeng's OpenStack is now based on Liberty, which is UOS 3. We would like the new features and bug fixes in the newer versions, but we need to merge some patches that fix specific bugs that happened in our environment. Upgrading so many nodes without breaking anything is tough, but we don't want to keep the cluster on an old version forever. UOS 4 will be based on the Newton version, and we want the whole cluster upgraded to UOS 4, that is, to Newton.

The next thing is NFV. Performance and reliability were important reasons why we chose Cisco VTS, but since software is more agile, more flexible, and easier to scale, we want to try an NFV solution as a next step. The performance of the OpenStack Neutron solution is very difficult to tune, since there are too many components in the Neutron network node, including the Linux kernel TCP/IP stack, iptables, tc, strongSwan, the VXLAN module, Open vSwitch, HAProxy, dnsmasq, and many other things. Putting so many components together to provide a network service is very complex.
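To put forwarding numbers in context, here is a quick back-of-the-envelope calculation of theoretical Ethernet line rate in packets per second; the 20-byte figure is the standard per-frame wire overhead (preamble plus interframe gap).

```python
def line_rate_mpps(link_gbps, frame_bytes=64, overhead_bytes=20):
    """Theoretical max packets/sec (in millions) for a given link speed
    at a given frame size, counting per-frame wire overhead."""
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6


print(round(line_rate_mpps(10), 2))   # 14.88 Mpps for 10 GbE, 64-byte frames
```

So a software switch doing around 10 Mpps is operating near 10 GbE line rate for small packets, which is why per-core packet rate, not just bandwidth, is the figure of merit for NFV data planes.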
That is why Neutron is so difficult to tune. So we have researched NFV for about one year, and we want to put UOS NFV into UOS 4. Here's a preview. This data was measured in our lab, all on pretty ordinary off-the-shelf hardware, E5-2600 series CPUs. On such machines, OVS-DPDK can do 10 mega-pps of layer-2 VXLAN switching with only four cores, which is maybe 10 times the native Open vSwitch.

As for the VNF, we have tested VPP for routing. We don't want to take VPP as the router in the first step at Hongfeng Bank, because we want to try some simpler VNFs, such as a load balancer or VPN, first; but what we have tested first is VPP, and in our tests a VPP VNF can get very good performance, about 0.8 to 3.1 mega-pps of L3 routing. The range is wide because we measured two data points: one is the NDR, the non-drop rate, which is near 1 mega-pps, and the other is the PDR, the partial drop rate. The data is pretty close to the FD.io community's reports. All of this is a pure software implementation, with no SR-IOV, and we tested with Cisco T-Rex; T-Rex is another open-source tool that Cisco released to test network performance. The tests used about a hundred flows.

Okay, that's all. Thank you.