Welcome, everyone. Today we're going to take you through our use case with Manila at Paddy Power Betfair. I'm Steven Armstrong, principal automation engineer, and this is Marius, automation engineer. Hi, I'm Kapil Arora, I work at NetApp. I work with these guys, and as a cloud solutions architect I help them with the implementation of Manila and any questions they have around it.

So, a bit about Paddy Power Betfair. We were formed from a merger in 2016. We have offices all around the world: in the UK, Romania, Portugal, Ireland, Malta, Gibraltar, the USA and Australia. We have an engineering blog that we update based on the tech we're using, and we try to post to it on a regular basis, so do check it out. The company has over a thousand engineers, and we have multiple different products, such as the Exchange, the Sportsbook, Games and Retail.

The bit that actually makes us different is the volume of transactions we do on the platform: around 135 million daily transactions and around 30 billion daily API calls. And the reason we're here at the OpenStack Summit is that we are building a 100,000-core OpenStack cloud with around two petabytes of storage.

This is what our reference architecture looks like; we've been over it in some of the other sessions, but as a refresher: we use our global load balancing solution, UltraDNS, at the top. That leads into two tiers of NetScaler. We have our external SRX firewalls, then the NetScaler MPX, where we do SSL offload in hardware, coming down into the Citrix NetScaler SDX, which does content switching to each of our microservice applications that live in OpenStack. The way we bridge these networks is with a Nuage Networks VSG device, which means the OpenStack bubble, where you have the networking, can bridge out to external networks, and you've just got simple ACL rules that link that all
together. At Paddy Power Betfair we use a leaf-spine topology, and we use Arista for that: we have a series of spine switches interleaved with top-of-rack switches. For each of our compute racks or infrastructure racks, we have two leaf switches sitting top of rack, configured in MLAG mode, and the leaf-spine topology creates a BGP routing fabric. In terms of storage, we use Pure Storage and NetApp, a combination of the two. NetApp is used for Manila, which we'll take you through in this presentation. We also use HP OneView to configure the RAID configuration on each of the KVM hypervisors that we scale out. Nuage Networks is installed on each of the compute nodes: it's their customized version of Open vSwitch, called the VRS, and that controls flow data in and out of the network based on ACL policies.

It's a completely active-active pair of data centers with dark fiber between the two. What we do is design our applications for failure and deploy them across two DCs, so if we're doing maintenance we can take down a data center, or a portion of it, and if we lost a data center to a failure, the application and customers aren't impacted. Kapil will now take you on to Manila.

Before we get started on the use case and how it is implemented, we wanted to take some time to introduce Manila to you: what exactly Manila is, and what it can offer us in an OpenStack cloud environment. I'm sure most of you are already aware of Cinder, which is the block storage project in OpenStack. Just as Cinder is for block storage, Manila is the file share service among the OpenStack projects. So what can you achieve with Manila? Basically, as a tenant you can say that you need an NFS share in your environment, and you can say, I would like this share to be accessible to this and this particular VM or host, right?
So you can have CIFS, NFS, or HDFS kinds of shares that you can provision, and with the help of Manila you get an API layer which helps you abstract the underlying storage systems or NFS servers that you have. That's what Manila offers you. Similarly, you can create different types of shares, and then the user comes in, creates those shares, and provides access to them.

So why do we need Manila? Like any other OpenStack service, you want to abstract different kinds of storage backends (for example, different storage systems or different kinds of hypervisors), and you want to provide a standardized API, which you can use to offer self-service to your applications. You can also add it to your automation: if you are provisioning an environment or an application which needs a file share, you can add the Manila API to your provisioning. Manila offers a UI and a CLI, like all other OpenStack services, and also a REST API. Underneath the Manila layer you have different storage drivers, like the ones we have for NetApp and other storage vendors, and also, for example, for CephFS.

Now let's try to understand the basic concepts that exist in Manila, so that we have a better understanding when we look at the demo and the lifecycle. In Cinder we have volumes, but in Manila we have shares; it's a one-to-one kind of mapping. When you think about shares, it's Manila; when you think about volumes or LUNs, it's Cinder. What is different in the case of Manila is that you need to provide access to the share, which is different from the case of Cinder. In Cinder you attach your block storage to your hypervisor and then it is exposed to your VM, but in the case of Manila this is network-attached storage, right?
The VMs have direct access to the storage system, so there's a direct storage connection between your VM or your host and the storage system, and you need to provide access rules, which is different compared to block storage. Then there's also the concept of a share network: on which network is this particular share available? So those are three concepts, and we have four more. You can have security services: because it is network-attached storage, you can integrate it with LDAP, Kerberos or Active Directory. You can take snapshots of your shares. And sometimes people get confused between backends and drivers: a backend is any storage system that can actually provision these shares, and a driver is an implementation from a storage vendor like NetApp, or an implementation for CephFS or something like that. The driver is what implements the APIs, and the backend is the storage provider. Those are the basic concepts that exist in Manila.

Now, if we look at the contributions to this project since it was started (this is a snapshot from Stackalytics), I'm just trying to show that there is a lot of work already going on, and we have lots of vendors contributing. You can see that NetApp is the leading contributor to this project; I didn't mention so far that NetApp actually founded this project in the OpenStack community.
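To make these concepts concrete, here is a toy sketch, not Manila's actual code (every class and method name here is invented), of a share carrying IP-based access rules, in the spirit of `manila access-allow <share> ip <cidr>`:

```python
# Illustrative model of the Manila concepts above: a share has a protocol,
# a size, and a set of access rules; a client can mount it only if a rule
# matches the client's IP. This is a teaching sketch, not the real project.
from dataclasses import dataclass, field
from ipaddress import ip_address, ip_network

@dataclass
class Share:
    name: str
    protocol: str                 # e.g. "NFS" or "CIFS"
    size_gb: int
    access_rules: list = field(default_factory=list)   # allowed CIDRs

    def allow_access(self, cidr: str) -> None:
        """Conceptually like `manila access-allow <share> ip <cidr>`."""
        self.access_rules.append(ip_network(cidr))

    def can_mount(self, client_ip: str) -> bool:
        """A client may mount the share only if some access rule matches."""
        return any(ip_address(client_ip) in net for net in self.access_rules)

share = Share(name="logs", protocol="NFS", size_gb=10)
share.allow_access("10.0.0.0/24")
```

The point of the sketch is the difference from Cinder: access is granted per network/IP, not by attaching a device to a hypervisor.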
We started the Manila project and we have been doing a lot of work on it; we have lots of developers continuously doing development work on this project.

All right, so as I mentioned before, if you understand Cinder, the architecture that we have for Manila is very similar. You have a scheduler, like any other OpenStack service, and the role of the scheduler is to figure out, when a request comes in for a share, where to place it. That's the job of the scheduler in the Manila architecture, and it's the same in Cinder: a very similar architecture. It also has a MySQL or other SQL database, which you use to store the information about your shares: all the shares that are provisioned, and all the information about them, are stored in the SQL database. You get lots of requests from the API server; all these requests are queued in RabbitMQ, or whatever queue implementation you're using, and then you have the different drivers. For every driver you have a manila-share process running, and if you have different kinds of drivers, you can use them all together and abstract the different kinds of storage backends that you have. So it's very similar to how Cinder is architected.

Now, many people also get confused: is Manila sitting in the access path to my share? What happens if the Manila service goes down? Do I lose access to my share?
So it is important to understand that Manila is an orchestrator. Manila is going to help you provision these shares, and it is going to help you abstract the underlying layers of different storage systems, but it is just doing orchestration. The red lines in this diagram are the control path: commands and API calls running between these components as they talk to each other, but there's no data access involved there. The data is accessed directly from the client to the storage system: the client and the storage system talk to each other directly, and the control path is totally separate.

So in this case, if a request comes in from the Horizon UI for a new share, Manila sends the request to the driver, the driver talks to the storage system (in this case a NetApp ONTAP system), and it provisions the share. Then the user runs another API call and says, allow access to this particular share, and the driver puts those access rules onto the storage system to allow the access. Once that is done, Manila is essentially out of the picture, and data is accessed directly between the storage system and the VM.

So we have block storage in Cinder and we have object storage in Swift. Where does Manila fit in? Many people may ask: why do we need another kind of storage? Why do we need a file share service? Where does it fit in? What are the use cases for it?
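Before moving on to the use cases, the control-path/data-path split just described can be sketched in a few lines. This is purely illustrative; every class and method name here is invented, not Manila's real API:

```python
# Sketch of the flow above: Manila orchestrates provisioning and access rules
# via the driver, but afterwards the client talks to the storage system
# directly -- Manila is never in the data path.

class StorageSystem:
    """Stands in for a backend such as a NetApp ONTAP system."""
    def __init__(self):
        self.exports = {}                  # export path -> allowed client IPs

    def provision(self, name):
        path = f"/vol/{name}"              # export path handed back to Manila
        self.exports[path] = []
        return path

    def add_export_rule(self, path, client_ip):
        self.exports[path].append(client_ip)

    def read(self, path, client_ip):
        # Data access: the client hits the storage system directly.
        if client_ip not in self.exports[path]:
            raise PermissionError("no export rule for this client")
        return f"data from {path}"

class ManilaOrchestrator:
    """Control path only: create the share, apply access rules, then step aside."""
    def __init__(self, backend):
        self.backend = backend

    def create_share(self, name):
        return self.backend.provision(name)

    def access_allow(self, path, client_ip):
        self.backend.add_export_rule(path, client_ip)

backend = StorageSystem()
manila = ManilaOrchestrator(backend)
export = manila.create_share("app01")
manila.access_allow(export, "10.0.0.5")
# From here on, the client reads from `backend` directly, not through `manila`,
# so losing the Manila service would not interrupt access to the share.
```

Notice that once `access_allow` has run, nothing ever passes through the orchestrator again: that is why a Manila outage does not cost you access to existing shares.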
Generally, block storage in Cinder requires more management, while object storage needs very little management from a storage point of view. Manila sits in between the two: you need less management of the file shares or the storage system, but you get more usability. The user can easily access shares, mount shares and work with them.

So what are the use cases? Basically: big data. If you want to provision HDFS shares, you would probably not want to do that with Cinder; you can, but Manila may offer you a better solution there. Then databases: especially in the NetApp world we see lots of Oracle databases running on NFS, so that is a use case for customers who are moving their traditional or enterprise workloads into the OpenStack cloud; they can leverage this and keep the architecture of their application the same as it was before. Then we have other traditional applications, like SAP. SAP systems, for example, always need a file share to store shared files and binaries, and we envision that these traditional applications should also run on OpenStack eventually; in fact they are already running. For example, we had a talk with SAP last year where they showcased how they are using Manila in their enterprise applications. So enterprise and legacy applications are also important. And then you also get snapshotting and cloning capability, which can plug into your CI/CD systems and help you reduce your build times and the like.

Those are some of the use cases, and now we will focus on Paddy Power Betfair's use case: why they chose Manila and what kind of applications they are running. On the right-hand side we also have different CLI commands, just to give you an idea of what you can do with Manila.
You can create Manila shares, delete shares, provide access to shares, create snapshots, list snapshots, and create different types: like the different volume types in Cinder, you can create different share types in Manila using different backends or storage types. So that was an overview of Manila, what it can provide and what you can do with it, and now Stephen will show you the use case that Paddy Power Betfair has. Over to you.

Okay, thank you. Our main requirements for Manila are really to provision NFS shares on demand and to control them programmatically through the OpenStack APIs. One of the things we had was syslog shares that were generally created by an external storage vendor, where a ticket was raised and a spreadsheet was filled in. What we really wanted to do was eradicate that from the platform completely and make it self-service for our developers. So our use cases were really to allow developers to self-serve and to let them extend their shares programmatically. We also needed, for active-passive applications, to be able to replicate shares between data centers, and that's a feature we would really like to see built in for multi-region workloads: if we've got our OpenStack in our first DC and we need to synchronize data across to the second region in another DC, being able to support that would be good for us. Currently we have to go down to the NetApp level to do that and use the NetApp APIs; it would be good in the future to be able to do it at the Manila level.

So, over to Marius, who will take us through how we deployed Manila on OSP7. Yeah, at the time we did this setup, on OSP7 with Kilo, Manila was a technology preview, so for the installation we used a script. In this diagram we can see the installation topology.
We have clustered Data ONTAP, which is an HA pair, and on top of this we have an SVM. We use the management interface for Manila and the data interface for mounting our volumes. All the Manila services are running on our controller: manila-api, manila-share and manila-scheduler. With OSP10 it will be simpler to install Manila, because it comes directly with an overcloud template: there will be a specific Heat template for each of the services, manila-api, manila-scheduler and manila-share, and obviously for the Manila backend on the NetApp, and to deploy it you only have to run the overcloud deploy and use the template.

Okay, so in terms of building this into our self-service pipelines: we use the notion of 12-factor applications, where we keep the operating-system level completely immutable and disposable. Every single deployment will blow away the virtual machines and spin up brand new ones, and then, following that principle, all data resides on attached storage, in this instance Manila. We also don't keep virtual machines on the platform for longer than 30 days, so if a team hasn't done a redeployment in that time, we will basically get them to trigger one. All our patching is done at the start of the pipeline process, where we produce a new base image: CentOS 6, CentOS 7, or Windows 2012 R2 if you're feeling lucky. That allows us to patch everything, produce those images using Packer, and upload them to Glance. So Manila is really going to be used to mount different shares in this process: teams use a self-service Ansible YAML file, fill in the share information they need, and use it to provision NFS shares in the pipeline.

As an example of some of the applications we're currently deploying (we're quite early in our use cases), we've been trying this out with some of our tooling applications. Jenkins, as you know, is file-system-based, so it makes sense to deploy it on NFS and use it for this use case. We also have a use case for ThoughtWorks Go, which we use for our deployment pipelines. We treat all of our tooling the same way as we do customer-facing applications, so they all have a self-service pipeline to deploy through test environments and production. For the ThoughtWorks Go use case, all of our ThoughtWorks Go agents' files reside on NFS: when we're doing deployments, it pulls the files down onto a ThoughtWorks Go agent, and then they're shared between the different agents, which speeds up the deployment process. JFrog Artifactory also has an NFS requirement, so we provision Manila shares for that as well. And we have some customer-facing applications: this is our Cedars application that our traders use, which has XML files that reside on shared storage, so using something like block storage for that just wouldn't make sense.

This is how it fits into our Ansible self-service inventory file. We have our VM naming standard, which here specifies that we want two virtual machines per DC. We also create the flavor and put in the OS image (the options are CentOS 6 or 7, or Windows 2012 R2 if you're doing Windows), and we specify the flavor with the vCPU, RAM and disk space. At the bottom we have the host aggregates: in the line item with the virtual machines, we put the particular hosts that those virtual machines will land on. This is for disaster-recovery purposes, so if you lose a hypervisor, it only takes down a percentage of that application. So how does Manila fit into this? We have a run list that says which application to install on those virtual machines, and then with Manila we basically specify the NFS share type and the particular volume.
So, for instance, the mount points. Once the development teams have specified this self-service file, they check it into GitLab, and then the pipeline is ready to deploy. The next time they do a release, they increment their particular RPM version, and that triggers the self-service workflow: a get-prerequisites step pulls down all of the Ansible playbooks that will be used to deploy the application, and any Ansible roles necessary to install it. The second stage, based on the inventory file, creates a flavor and then assembles the host aggregate dynamically, based on which hypervisors were specified. The way this works is that we tag each flavor with particular metadata, and we tag the host aggregate with metadata as well; if those two metadata tags match, the Nova extra-specs filter will place the VMs on those particular hosts.

We then check capacity against the hypervisors, and we also check it against the NetApp, to make sure we have enough capacity to do the deployment, because you don't want a broken deployment that maxes out the disk. We then create the network: this creates the zone for that microservice application in Nuage, and the subnet, which is mapped one-to-one between OpenStack and Nuage. We then launch the virtual machines onto the particular hypervisors that were specified, and we tag those machines with metadata that says which profile of application will be installed on them. At the next stage, when it comes to running Ansible, it reads that metadata tag and installs the particular application based on it. Every step in this is just a playbook in Ansible, so it's modular and can be reused.

Then we create the VIP against the NetScaler, also using Ansible modules, and then we do a rolling update. What we do is create the Manila share at this point, ready to be mounted to the particular application; we then mount the VMs to the particular Manila share, and then we serve live traffic on the load balancer. So that's the first deployment. We then test the application to make sure it's good, clean up the previous version, and promote it to the next stage. This goes through quality assurance, an integration environment, performance testing and then production; it just goes through that same pipeline step each time.

When the second deployment comes in, we set up the flavor and host aggregate; if there have been any changes to the spec of the flavor, a brand new one is created, and the new boxes are created with that new profile. We then check the capacity again to make sure we have enough. We create the B network: we have completely immutable networks here, and it applies the ACL policies, so we don't do in-place updates. We then launch the new virtual machines for the B deployment and install the application on them. We create the VIP; our modules are completely idempotent, meaning that if the state of the VIP hasn't changed, no change is made and that stage is just skipped. Then we mount the virtual machines to Manila; obviously this is different from block storage, so you can have multiple mount points. We then switch the traffic over to the new mount, test the application, and clean up the previous version, and that alternates between the two on each deployment. Everything that we do is completely immutable in terms of all the components in OpenStack; the only thing that lives on is the Manila share with the data on it.

Okay. For the automation of this we have some custom Ansible modules that our developer Mario Santos has written, and in this diagram we can see the flow of automation: our pipeline runs the Ansible modules, which go through the REST API to create the share and then provide access to the VM for the specific share. Now, can you switch to the live demo? No pressure. It's not switched; can you switch the presentation? I think they were sleeping. We're good now. Thanks.

So for this demo we have an application which is called Boston. As Stephen said, we have our inventory file, and inside we have specified our volume, which is a NetApp share type, one NFS share, and the mount point will be Jenkins. For this demo we chose to deploy a Jenkins master and attach the home directory of Jenkins to this mount point. Inside Horizon we can see that we have our A/B deployment, and the VM on which the Jenkins master is actually running is the B box, so we're at the rolling-update stage at this point in the pipeline. Here we can see that we have our volume, which is mounted on the Jenkins mount point, and to prove the persistence of the share I will create a job, which I will call Hello Boston. You've got blue balls. Green balls are back? Are they green? I'm colorblind, oh yes, it doesn't make any difference to me. Another green one, then: I will create another job, Hello OpenStack. We are just trying to generate data within Jenkins. If we go here, we can see our jobs, and if I do a `manila list` on our controller, we can see our Manila share. Now I will run our rolling-update playbook, which will do the rolling update step by step.
So here's the playbook. Here is how we create our Manila share: based on the inventory, we get all the specifications, so it's just pulling them back from that self-service inventory file which I showed you earlier. Once the share is created, we provide access on the share, then we mount the NFS share on our VMs. We then check the SSH connection before stopping the application on the old boxes, then we unmount the NFS share from the old boxes, and we change the default config of Jenkins to our mount, /jenkins. We start the application on the new boxes, and for demo purposes we use a CNAME; normally in production we have a load balancer. Based on the CNAME and the A/B deployment, we flip the CNAME, and at the end we check that the CNAME was changed.

So now we're just about to run the self-service playbook to show you all those steps in action. I don't blame you for not typing that out live. Can you see? Yep. It would have been magical otherwise, right? Fingers crossed.

Just to show you: here we have the volume mounted on our B box, and if I SSH onto our A deployment, as you can see, we don't have the Jenkins share mounted. So: we generate the CNAME, then create the mount, we provide share access on the mount, we check the SSH connection, and we stop the application on our old box, which is the B deployment. That part is specific to Jenkins, because you can't have two instances accessing the same home directory. Then we unmount the share, then we change the Jenkins home location to the mount point, we start the application, and we flip the CNAME to the A box. As we can see, we still have the same content, from the same NFS share. Woo. Can we flip back?
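The flip just demonstrated can be sketched as a single state transition. This is an illustrative toy with invented names, not our actual Ansible modules:

```python
# Sketch of the rolling-update flow from the demo: stop the app on the live
# side, move the shared NFS mount, start the app on the idle side, then flip
# the CNAME/VIP. The Manila share is the only thing that persists.

def rolling_update(state):
    """`state` records which side (A or B) is live and where the share is mounted."""
    old = state["live"]
    new = "A" if old == "B" else "B"
    state["app_running"][old] = False      # stop the application on the old side
    state["mounted_on"].remove(old)        # unmount the NFS share from old boxes
    state["mounted_on"].append(new)        # mount the same share on the new boxes
    state["app_running"][new] = True       # start the application on the new side
    state["live"] = new                    # flip the CNAME / load-balancer VIP
    return state

# Starting point matching the demo: the B box is live with the share mounted.
state = {"live": "B", "mounted_on": ["B"], "app_running": {"A": False, "B": True}}
rolling_update(state)
```

Running `rolling_update` again flips back, which is exactly the alternation between A and B deployments described earlier; the share itself is never recreated, only remounted.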
So, the benefits of using this continuous-delivery workflow with things like Manila and block storage: we do around a thousand code deployments a day on the platform. Every time someone checks in to source control, it triggers a deployment pipeline that goes all the way through to production and generates a release candidate. This means we have quite a big churn of virtual machines on the platform: we spin up around three thousand virtual machines a day across test and production. That's our development teams innovating on the platform and creating the new products we want to deliver to market.

One of the main features of the continuous-delivery process is its low mean time to recover from failure. Each of the pipeline stages that you see is linked up to a Slack channel, so if any of the common workflow actions fail, we get a notification and can see why: whether it's a developer issue, where they filled in the files wrongly, or an actual error on the platform. This also gives us a completely traceable deployment lifecycle for applications, because we use those ThoughtWorks Go templates and we deploy all of our microservice applications in an identical way, which means it's completely repeatable as well.

As for the scale of the implementation at the moment, we're running just over 144 compute nodes today, and we're serving around 25% of our production workloads on OpenStack.
We're still onboarding and in the migration phase, so that will increase exponentially; we're onboarding more and more applications each week, and that will take us to a hundred-thousand-core OpenStack with around two petabytes of storage. That completes our presentation and demo. If you have any questions, let us know. We have a white paper on our reference architecture if you want to download it and have a look at what we've done. We put it together because we really wanted to help other users who want to go on a similar journey in terms of continuous delivery and setting up an OpenStack private cloud, and you can see some of the decisions we've made in there as well. Another thing to mention before we get to questions: there's a book signing just after this where they're giving away my book and I'll be signing it, so you can get a free copy if you come down afterwards. Any questions?

I just also wanted to add that we are just getting started on this, and we plan to add more of the features that Manila offers, like replication and share migration, into this framework that we are developing.

Question: just one question on the choice of NetApp. Did you consider any other backends for Manila, other than NetApp? Not really; generally the storage solutions that we had on the project were already in use and fairly thoroughly proven, so we used Pure Storage and NetApp, and we didn't look at any others. The one thing I would say is that the Manila driver for NetApp was much more mature and fully featured than the others, and we were just looking to utilize OpenStack to do everything programmatically, so it just made sense plugging in what we already had.

Question: you have been using NetApp for creating the file shares in Manila, but what is the other storage that you use, and for what type of workloads? I think the other one is Pure.
Yeah, we use Pure Storage for block, so we use it for our databases, though we also use NetApp for some of our databases; generally it's down to the choice of the team, what they use, based on the throughput requirements. Do you work for Pure? No. We use a local-disk solution on the hypervisors: as I said before, with 12-factor apps, it's only the data that resides on the block storage or the Manila share. We keep our virtual machines immutable, so we're not deploying them onto a centralized storage solution, and the local disk for the operating system doesn't sit on Pure Storage or NetApp. I was only joking about the Pure thing. Okay, thank you very much, everyone.