So, Alexi, can you still hear us? Yes, I can. My name is Kieran Sambarton, and this is the case study session on the British Council. We took them through a multi-site to containers migration recently, and we did that with our partner, IXIS. So that's what we're going to talk about today. A bit of background: we met the British Council at a breakfast event we did early last year. We had 40 prospects come to the Ivy restaurant in London with another partner, and we talked to them about the PaaS platform, what other customers were experiencing, how it was improving the development process and the consistency of environments, automating blocks of DevOps processes, and how all that rolled into continuous delivery. The British Council seemed fairly impressed with that. They subsequently released an RFP for a container-based PaaS, which IXIS bid. IXIS were an incumbent supplier of infrastructure and support services to the British Council, so we teamed up with IXIS, we bid the platform, and we won. And we then promptly started the migration work, very quickly. Within the first three months we'd gone through the planning exercise and started migrating sites from this Drupal multi-site — 130 countries — onto containers on the platform. So, who have we got speaking today? There's Nick Morgalla, who is being backed up by Alexi, who we've got on the phone. Nick was meant to be here, but he's got laryngitis, so he's not turned up. But we've got this phone connection and we've got Alexi, hopefully, standing in for Nick. Then we've got Mike Carter, who's the technical director for IXIS, and he's going to talk about some of the project work and the things they did to get this implemented in conjunction with the British Council team. And we've got Rob Douglas, who some of you may know. He's our VP of customer experience.
And he's going to mop up towards the end and talk about next steps, what could be, the roadmap, and exciting things like that. So with that, I think I'll hand over to Nick and/or Alexi. OK, thanks for that. Nick isn't with us because, as you mentioned, he's got laryngitis. Just for some context: Nick is our head of ops, and these are actually his slides, which I'm going to be talking to. I look after the development team and the product team in the digital department of the British Council. A bit of background on the British Council. If you go to Nick's slide — this is basically who we are. The British Council was founded in 1934, and basically we're here to promote social, cultural and educational opportunities between the people of the UK and the wider world, essentially. If we pop to the next slide — I can't control this, can you? Nick's slide — we do three main areas of work. We work in English learning, which includes providing courses, examinations and teaching materials, and we also use our expertise in education to help transform national educational systems around the world. And we work in the arts, promoting UK arts internationally and also encouraging arts projects and artists around the world. So it's a very international business we're in. If you skip to the next slide: we're present in over 100 countries. I think it's about 116 country sites that we run. So that gives you a sense of the scale of what we needed to achieve. Most of our sites offer content in languages other than English as well. The slide says more than 40, but I think it's about 50 languages that we're supporting across our different sites.
So in addition to what we call our country sites, which are basically promoting or raising awareness of the activities I spoke about previously across all those 100-and-something countries, we also create what we call white label sites, which are effectively sites using our CMS that aren't branded in the same way and might have a particular reason for being. A recent example has been Shakespeare Lives, which was one of our bigger white label sites, and that, as you can probably guess, celebrated 400 years since Shakespeare's death. The platform is Drupal 7, and we are looking at Drupal 8. The team here is an agile team and we deploy pretty much every week — regularly, every week, we put out a deployment. Presently our focus is on e-commerce, building e-commerce functionality into those websites. Next slide. So what keeps Nick awake? I guess the first thing is security and data protection. It's ensuring that the sites are secure — obviously we need to avoid all the obvious risks around that — and we also have data protection obligations, around European Union expectations and rules, so we need to be hosting within the EU, or at least the European Economic Area. We have an expectation that all our sites are going to load in under two seconds. We also need to minimise any downtime, particularly around deploys and any other kind of central activity, and essentially keep the development pipeline flowing and not distract our teams from getting the stories out the door. If we skip to the next one: the reasons we were looking to change. Our previous hosting setup was a traditional data centre model. It was basically one large cluster — one Drupal multi-site instance — that served all our sites, and that probably wasn't working for us the way we wanted it to. We needed to improve performance and reliability.
Our deploys were killing us — our downtime was in the hours when we were putting out deploys each week, so it was a huge overhead in terms of how much downtime we had. We also had a lot of effort going into trying to fix deploy issues, as opposed to letting the developers get on and do what we needed them to do, so there was another unwanted cost for the business in that area too. If we can get the next slide on that one, please. When we came to look at what we should do next, there were a number of considerations and requirements. We were looking for an SME service provider, ideally procured via G-Cloud — that was something the UK government encouraged us towards, and we saw that as a plus. We also wanted to look at using a container-based model for hosting our sites. We felt that would help us achieve high availability in a predictable environment and improve our reliability, and it also aligned better with our own development workflows and processes. We needed it to integrate with Git. And we also had a view to adding the commerce components, as mentioned, so that was obviously a requirement. Service management: we obviously needed that to work with our internal processes and also to conform with our security and data protection requirements, as mentioned. OK, so the next slide. Great. So, what we came up with. In the workflow you can see on the left there, our repo, our hosting instances and our testing environments all work together. Basically, this fit very well with the workflow we had — and if we hadn't had it, it's the workflow we'd have wanted. It enabled the multi-branch testing we were lacking beforehand, which we really wanted and which we now depend on quite heavily. So in terms of the structure and process, it really suited the way that we've got things working. So we're very happy with that.
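To make that branch-per-story flow concrete, here is a minimal sketch. The repo contents, file names and the "platform" remote are hypothetical; in the real setup, pushing a branch to the Platform.sh project spins up a matching throwaway test environment for that branch.

```shell
# Sketch of the multi-branch workflow (names hypothetical).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q site
cd site
git config user.email "dev@example.com"
git config user.name "Dev"
echo "name: solas" > .platform.app.yaml
git add -A
git commit -qm "Initial site configuration"

# Each story gets its own branch; on Platform.sh, pushing the branch
# would clone production into a matching test environment.
git checkout -qb feature/ecommerce-basket
echo "basket enabled" > basket.txt
git add -A
git commit -qm "Enable basket feature"

# In the real setup:
#   git push platform feature/ecommerce-basket
# ...then test at the environment URL, merge, and the change deploys.
git branch --list
```

The point of the workflow is that every branch is testable in isolation before it is merged and deployed on the Monday run.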
Also, as you can see on the right, there's a little image there around Akamai, which is effectively our CDN. The Platform.sh components integrated really well with Akamai, so that all worked. And the image on the bottom right illustrates how effectively each one of our sites is a container — a project — on the platform environment. We have different types of sites, as I mentioned: our .org site is our UK-facing site, and we have our white label sites. Each can share components between them as required, or be discrete as needed. So that gives you some idea of how things have been set up on a site basis, on a container basis. If we go to the next slide, on migration: the project was very rapid — I understand it's considered a very rapid project. It took us three months from the beginning to the end of the migration to move over the 100-plus sites we had. Work began a bit before that, obviously, doing some proof of concept, establishing what the process would be, surfacing issues before we went into the real migration and shaking out any problems. But when we did get going, it was all done within three months. And the process was very collaborative. As it mentions here, we made great use of online tools, because we were quite disparate geographically — certainly within the British Council, but also IXIS and Platform.sh were spread around the globe. So that was no barrier; we worked really well in that sense. And we delivered it within budget, in September 2015, as it says here. So that was quite a burn to get it done as per the plan, in what I feel was a very rapid migration. On to the next slide: the results. Yes, we've seen immediate improvements, to be honest. Site performance: a 30-40% reduction in load times. The downtime for deploys was slashed, frankly — you can see that there, from 46 hours down to one and a half, including the testing — which has been a massive improvement for our department and also for the wider business.
And the reliability has definitely improved as well. As mentioned, our developers can go back to developing, as opposed to trying to troubleshoot issues with deploys and the like. Next slide. So, all in all, not only were we pleased, but the project — the migration, the planning of it all and the solution itself — actually won a Real IT Award. I can't exactly remember what the category was, but yes, it was definitely recognised by the industry as well, which was very good to see. I think that's pretty much the slides I had prepared. Right, great. Appreciate that, and appreciate you dialling in. Thanks very much. Please continue. OK. As Kieran said, I'm Mike Carter from IXIS, the technical director. I'm going to give a bit more background on the technical work we wrapped around the platform's services to meet the British Council's requirements. I want to cover a bit of how we re-architected, with the British Council's digital team, their Git repositories to fit the way the platform worked, and moved them away from being a single multi-site. I'm going to cover a bit about the challenge of deploying 116 — and now 125 — sites all at a single click; a bit about how we achieved a good way of doing backups and data synchronisation; and of course I'll cover a bit of the automation we did using the platform API, including a bit about how we're doing monitoring, and then the service that IXIS provides every month with the British Council. So, as was mentioned slightly earlier, the original way the British Council's development team had structured their project was based on a single Drupal installation, with every country as a separate multi-site installation.
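For anyone not familiar with Drupal multi-site, the original layout would have looked roughly like this — one shared codebase with a per-country folder under sites/ (directory names here are hypothetical):

```text
drupal/                       # one shared Drupal 7 codebase
├── modules/
├── themes/
└── sites/
    ├── all/                  # code shared by every country site
    ├── india.example.org/    # per-country settings.php -> own database
    ├── brazil.example.org/
    └── ...                   # ~116 entries, one per country
```

Every country shares the same code checkout, which is exactly why a deploy was all-or-nothing, as described next.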
The downside of that was that every time one of the country sites needed to be amended and deployed, you had to deploy every single country site and then run things like feature reverts and cache clears for every single site, which resulted in a huge delay — around four hours, I think — every time they wanted to deploy just a single change to one site. It was all or nothing. As part of the migration process, we worked with the British Council team to split up the single Git repository. We moved a lot of the code into an installation profile, which they codenamed Solas, and then each of the individual countries that were under the original multi-site was moved into an individual Git repository living on the platform's system, so that every time we commit to one of those smaller repositories, it redeploys and rebuilds that site. As part of this we then created a Drush make file for every country, based off a templated make file that we can pull in, so that it's easy to update across all 116 sites should a change be needed. So, deployments. The British Council said they needed a robust, reliable and non-disruptive one-click deployment process across hundreds of sites — doing it the traditional way, deploying one at a time, isn't really acceptable. As Alexi mentioned, they do a deployment every Monday, usually without fail, and they need it to work. Part of the deployment process also involved deploying any changes to the platform app configuration files, so we had to come up with a way of managing these in a central place and then deploying them out to every individual country repository, to remove a lot of the headache and time wasted for the support team.
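For illustration, the per-country make file generated from that shared template might look something like this sketch — the Solas profile name is from the talk, but the URLs and version numbers are hypothetical:

```ini
; Hypothetical per-country Drush make file, generated from the shared template.
core = 7.x
api = 2

; The shared installation profile every country site is built from.
projects[solas][type] = profile
projects[solas][download][type] = git
projects[solas][download][url] = git@example.com:britishcouncil/solas.git

; Contrib versions pinned centrally; bumping a number here and regenerating
; the per-country files rebuilds every site on the new release.
projects[drupal][version] = 7.43
projects[views][version] = 3.13
```

Because every country file comes from one template, a module upgrade is a one-line change in one place rather than 116 separate edits.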
To solve a lot of the requirements, we built a dashboard using a piece of software called Rundeck, which provided a web interface for the IXIS support team and also the developers at British Council, so they could pick either a single site, a group of projects, or all of them at once, and deploy to production. Part of this involved a lot of custom scripts. They combined a lot of the boilerplate configuration files — the platform app file I mentioned, the make file, a routes YAML file (which was a big task, getting all the redirects in) and a changelog file — so every time they deploy a site, we're combining all these files together and putting them into each country's repository. The reason is that we have to commit something to the platform's country repository so that it will trigger a deployment. We use the platform's variable system as well, to store custom bits of information for each project, and then during the deployment and build process we pull out the variable information — we're currently using that for the size and the storage — and insert it into the generic platform app YAML file. That made it a lot easier for us to keep the platform file, with the other things like PHP versions and whitelists, generic and easy to maintain. As part of the deployment process, we started off by doing the deployments one after another, and we soon found out that took a long, long time to get through 116 sites — if we started at 10am on a Monday, it was a good few hours past lunchtime. So what we did was take all of the projects that were part of the group, split them up into batches of, I think, either 6 or 10, and use a parallel execution process so that we could run lots of the deployments in parallel and get through them a lot quicker. We reduced it from those four hours down to 35 minutes — and that was to get all the sites done without taking any of the sites offline — and it rippled
through the batches of 10 at a time, up to the current 126 sites I think we've got at the moment. A similar requirement — similar things we had to do — was the backups. The British Council required daily backups to be taken across all 116 sites, and as well as backing up the data, they needed to synchronise the data back from the production environments to their testing and QA staging environments, so that the same content the editors were creating would be available for running tests against. When we first started out, we were doing serial, one-after-another backups, which took 8 hours to complete on the original 116 sites, just because of the different storage and database sizes — sometimes it took longer to finish. We also had the data syncing from the production environment: we created an intermediary environment so that we didn't keep the production site locked for longer than it needs to be, and then we synced data from the intermediary one to the QA environment and to the staging environment in the background, leaving the production sites to carry on working. As we did with the deployments, we implemented a parallel batch process — I think we used GNU Parallel's sem, the semaphore tool — and after a bit of testing I think we got it to the sweet spot of about 6 jobs at a time to run the backups. This decreased the backup and synchronisation from 8 hours down to a much nicer 3 hours. At the end of the 3 hours we then ran some scripts which used the API to check that all the backups had finished and were date-stamped. Which brings me on to the platform and automation bits of work we did. These are lots of glue scripts and bits to do other useful housekeeping on the platform. As I mentioned, we used Rundeck to execute a lot of these scripts, and we provided separate buttons and interfaces to run them and get the output, so that different people would be able to monitor the platform. As I mentioned, there's parallel processing that could be calling the
platform CLI tool to do the data backups and the data synchronisations. We also used the previously mentioned platform variables process again, for storing other configuration data and then injecting it into the configuration files. We also created a way to group the projects. In one of the diagrams Alexi showed earlier, they've got the white label sites, the main .org site, and then the 116 country sites, so we've created a way where you can group them using the platform variables data, so that we can target the deployment of only certain types of sites, or a subset of the sites. With that, we're eventually going to be looking to do different time-zone-dependent deployments, so that we're never blocking different countries during their daytime. One of the last things we were doing with the API was setting up monitoring. The British Council had SLAs we had to meet, and we needed to prove we were meeting them. Externally, they were also monitoring, and we had to provide our own monitoring for this. We opted for a tool called NodePing, which is similar to Pingdom but was a lot more cost-effective at large numbers of sites, and they had a great API. So one of our other tasks was to take all the data from NodePing, combine that with the platform API's data about each project and the project sizes — standard, large, medium — and then combine all that data together to produce a number of reports that were accessible through the dashboard. This allowed us to see if some sites that were maybe on a medium-size plan were performing badly and needed to be upgraded. It also allowed us to see things like availability of sites — if any of the sites were going down, maybe from a traffic spike from a promotion or marketing. We also produced some reports that did a standard audit of all the projects: the total environments each project had, the disk size for each one, and a few other bits of data. We could run that and put it into the service review
reports, which we offered as part of the wraparound service the British Council required — we had to fit into their existing support workflows and be their single point of contact. This is one of IXIS's main services: we provided an ITIL-led service desk with a dedicated team, we integrated into the British Council's global service desk (the ServiceNow software), and we became a resolver group for all the Solas support issues, whether application or infrastructure, and then triaged those as needed through to the platform's support service. This included change, incident, problem and capacity management issues, and things like release management on a Monday morning. As I mentioned, every month we do service reporting and service reviews with the client, and a lot of the data we pulled out of the APIs was then put into the service reports, prettied up with graphs and made easier to digest. Part of the monitoring integration also included a bit of PagerDuty integration for out-of-hours alerting, should any of the sites go down for longer than a few minutes. And that is my technical coverage, so I'll pick it up from there. I'm Robert Douglas from Platform.sh, and I wanted to explain just a couple of the concepts that Mike introduced, from the platform side, and then go on to show you some of the things we'd expect the British Council to benefit from in the near future — some new platform developments. So first, I wanted to explain the concept of deployment on the platform altogether, because it means different things depending on what context you're talking in. If you're doing a multi-site deployment, as the British Council were before, it's basically replacing the code and then going through all of the existing sites you've got on there and running the update scripts. One of the inherent problems of doing that is that you've got a quite long stretch of time where the consistency between the code and the databases of these various sites is not good, so
if you need to do a database schema update — to install a new module, for example, or maybe a module is adding an index or a new column in the database — then the 136th site has to wait for all of the previous sites to do their updates before it does its own. It's running code that is not working with the database, so there's a disparity that could be either completely invisible, depending on what the change is, or catastrophic: if the code is telling the site to look in a column that doesn't exist, that's a fatal error on the site the entire time, and there was basically no way to get around that. So that was one of the original impetuses for moving off multi-site and splitting it into individual sites. When IXIS took that over and put it onto the platform, they took advantage of a couple of nice tricks the platform can do. First of all, the platform doesn't necessarily expect that all of the code will be put into the Git repository; rather, the platform looks first to see if there's anything like a project make file for Drush make, and if there is one, the platform builds that. So there's a phase — what we call the build phase — where it looks at the code you commit and tries to figure out how you want to build your site, whether it's a composer.json type of build or, in other technologies, npm for Node, or pip, or Ruby gems. It gets all the code that you want, including via a Drush make build, moves it into place, and then you can do things like your updates. So IXIS took advantage of the project make build strategy. When they want to update a module, instead of putting the code for that module into a repository — and making sure all the files are updated and any old files that no longer exist are taken out — they just update the number in the Drush make file from one minor point release to another. So they would literally change, say, the Views module from 6.2 to 6.3, and that automatically has the consequence of
updating Views on all 136 of their sites, because they're all being built by Drush make — and the same would work with Composer. So that's one thing I wanted to explain. Another thing Mike talked about that might have been foreign to you, if you're not familiar with the platform, is the concept of a platform app YAML file. YAML, of course, is just the format it's in, but what does a platform app YAML file do? It describes an application on the platform. The platform has the concept that, for any given project, you can have many applications. Imagine a Drupal site that has a Magento e-commerce site built into it, and maybe a Node.js application for chat, all working together as one website. The platform can do all three of those things in one package, but each one of those different applications needs a platform app YAML file that describes the individual application. What does it describe? It describes what language it's in — so PHP — and what version of that language you need: PHP 5.6, 5.4, 7, HHVM. It describes things like what writable mount points the web server needs — Drupal needs temp files, private files, public files, things like that — and some web server configuration. You describe the build and deploy scripts you need to run. It's a very powerful file that shapes the application. So what IXIS did was find a way to manage 136 of those that have slight variations on each other, depending on the exact individual site, but which have a lot of commonality between them: they're all running the same PHP version, they all have the same mount points, and things like that. So that was one of the magic tasks IXIS figured out. OK, so those were the two concepts I wanted to explain from Mike's bit, just in terms of what that means on the platform side. And now I want to tell you a few of the things where I think that, if we were to do this case study again next year, you would be able to see some of the advances the British Council have been able to make, based on new things that are
coming on the platform. Oh — I didn't capture your bullet, sorry, I thought you were going to free-talk this. Oh, you didn't actually get it? You did loads, did you? Can we get them into the Google doc then? Sorry, my mistake. Surprise. Yeah, great, super, no worries. Is it Windows or is it the internet? Right, well, while that's loading — just out of curiosity, who here has actually used the platform before, in any context? Great, so most of you are new. That's cool. We're doing demos downstairs on the trade floor, so if you have questions about the platform after this, please come down and get a demo — there's a lot we could talk about that I'm not mentioning here. OK, load any time now. Well, I'll jump ahead — I know my slides well enough that I can speak without them. Oh, they're coming. Yeah, it has nothing to do with Windows this time. So one of the first things that I expect — and I've confirmed this with Mike — is that the British Council will eventually take advantage of PHP 7. The platform was in fact one of the first platform-as-a-services overall to offer PHP 7 to anybody who wanted it. Those are yours — there we go. I can barely read this. Cancel. Wow, that was painful, but here we are — sorry about that thing hanging over on the side, we're not going to worry about that. OK, so, back up: a couple of things that I think the British Council will benefit from. One will be the move to Drupal 8. They've already planned that, but aside from the known benefits of moving to Drupal 8, there are some really cool things you can do with the platform which are pretty exciting. First and foremost, they'll be able to build Drupal 8 with Composer. The platform is able to take a composer.json file and, on the basis of that, build your Drupal site. And that's really nice. If you're not a developer and you've not worked with Composer directly: it means that when I want to install a new module, for example, I simply type composer require for the Views module, then I commit the generated Composer file to Git, and the platform
builds Drupal with the Views module, just because I've done that. It's brilliantly simple. The platform can also help you build that locally, so that it's an easy experience to do the development you need to do on your laptop. I've included a little bit of composer.json from one of the platform's starting points for Drupal 8, so you can see what that would look like: there's core coming in, at Drupal 8, and Drush coming in, and the Drupal Console tool coming in — stuff that developers would really want to have — and extending that is also very easy. Another thing that isn't strictly platform-specific, but that we really promote when we work with customers using Drupal 8, is the value of using Drupal's cache tags along with a CDN that is compliant with cache tags. That makes it so that you can invalidate cache in a very granular way — such as a specific view that you want to cache, or a block, or all of the content that's been authored by a specific user, or content that's been authored on a specific date. You can invalidate that cache without invalidating the cache for the whole site, and that makes it much more efficient to rebuild just that cache, and it goes very fast on the Fastly CDN. The combination of Drupal 8, the platform and Fastly in that case is really quite excellent. So there will be some nice things performance-wise. Wow, OK, good. Another thing the British Council are really looking forward to — they've basically been champing at the bit for us to deploy this so they can use it, and I sometimes get daily questions: is it ready yet?
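(To make the Composer point concrete: a minimal composer.json along the lines of the Drupal 8 starting point just described might look like this sketch — the package versions and project name here are illustrative, not the actual slide contents.)

```json
{
    "name": "example/drupal8-site",
    "repositories": [
        { "type": "composer", "url": "https://packages.drupal.org/8" }
    ],
    "require": {
        "drupal/core": "~8.0",
        "drush/drush": "~8.0",
        "drupal/console": "~1.0"
    }
}
```

Committing a file like this is all the build phase needs: the platform reads it, fetches core, Drush and Drupal Console, and assembles the site.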
I see some people smiling in the audience because they know they're the guilty ones asking the question, and we're saying, OK, it's coming soon — but it's actually now really almost there. The platform has always had a high-availability offering; we call that our enterprise plan. But that was totally overkill to do 136 times for the British Council — the cost would have been enormous. Our list price for those is something like 8,000 a year per site, so that would have knocked their budget out of the universe. What we're soon going to be able to do is offer high-availability sites, with redundancy, at a much lower price range, and that's a very exciting development for us. It means that even if you're running a smaller site, you'll still be able to choose the redundancy model that you have. For example, we work with several redundancy models. Our favourite is master-master-master, where you've got databases, or Solr search, or caches, that have three redundancies. That comes back to something from computer science called the CAP theorem, about making sure that your data is safe if something fails — you need three copies to do it, and we adhere to that. But we'll also be able to do master-slave, in some of the lower-cost, more traditional ways of replicating, and that'll give people a lot of options, so they can really choose their level of redundancy, and the price point they're willing to pay, at a very granular level, and everybody will be able to meet their needs very specifically. And we'll be rolling that out to the British Council's sites within the next months, in a way that they can start utilising. Right, there's the one about PHP 7 that I was talking about. The platform has long been able to run PHP 7, and also HHVM — the HipHop Virtual Machine for PHP that Facebook runs, which is why Facebook can run PHP and be really fast. We support both of those. The British Council is currently on PHP 5.4, so they're going to get a big boost just by testing out and switching to PHP 7
on the platform. That's really easy for them: they will literally update one line in their platform app YAML file and then roll that out to all 136 sites. When they redeploy with that change in the app YAML file, it switches from PHP 5.4 to PHP 7 on the deploy. And if it doesn't work for some reason — they discover at the last minute an error they've never seen before — they can change it back and roll it back, because the concept of the platform is to manage the code, the data and the infrastructure all together, and you can roll back your infrastructure the way you'd roll back your code. Another pain point that not only the British Council but also a lot of our other customers have been voicing is that there's simply not enough customer-facing monitoring. We monitor everything internally, but we've taken a long time rolling out really robust customer-facing monitoring, and we're attacking that from two fronts — and I know that, as a customer, they're looking forward to it. We're going to deploy either New Relic or something like it in the near future, to every customer, and we're also building our own metrics, analytics and logging framework that will be API-driven and customer-facing, so it will integrate with external tools as well as giving you a lot of the metrics that are currently missing. You can get data out of the platform, but it's sometimes harder than customers were expecting. We completely acknowledge that and are addressing it, and the British Council are really looking forward to that feature as well — or at least IXIS are, because they're the ones doing the operations. That will be very helpful for IXIS, and a lot of the reporting that they're on the line to do — that they're required to do — will also become simpler for them at that point. And we're really happy to see the British Council moving into e-commerce. We're working with them, making a plan for those sites, and their current needs are wholly met with Drupal, but it's worth
mentioning that should their future needs move beyond Drupal, or even beyond PHP applications, those are all fully supported on Platform, so they don't have to worry about being locked into a CMS technology or a programming language just because of their platform-as-a-service choice. They'd be able to run other PHP applications like Symfony or Magento, and if they needed to run Node.js, Ruby, or Python applications, those are available as well. We see a lot of other customers take advantage of that diversity to put together really complex, very modern microservice-oriented applications, where the deployment and DevOps regime for each application is the same because they're all standardized on Platform.

So that was everything I have to show. How much time do we have? Oh, well then, hopefully you've got just a ton of questions. Who's first? There's a microphone over there; if we can either pass it around or you can walk right up to it and blurt it out, that has the great advantage for the people who watch the session later that it's recorded. If not, I'll just repeat the question and it'll go through my microphone. This isn't going to move? Oh, okay. Right, who wants to be first? Yes, sir.

The one to put you on the spot is your partner, so to frame it another way: how did the Platform message help you, and what did you have to overcome? Do you want to repeat the question? Sure. The long and the short of the question was: how did Platform help you, and how did Platform hinder you?
Go ahead, answer frankly; I'll get you later. I've got the mic here. We provided the hosting previously for the British Council. As Alexi mentioned, it was more of a traditional setup, and when they came out with a new tender of requirements they specifically asked for container-based hosting. Although we'd been playing around with things like Docker internally, we weren't confident enough to use it on a client's production site, so I think Platform was really the only option within the EU at the time, and the British Council had already seen the demos. I think the challenge for Ixis was moving the client onto Platform. We did the proof of concept early on, and the differences between Standard and Enterprise at the time were quite significant, and when you're trying to migrate 160 sites there are a lot of different steps in the process: some of them were manual, some were straightforward through the CLI tool. The British Council decided they didn't like having two different ways of doing things, because they wanted to be able to hand it over to their service desk to carry on managing it when they brought new sites on later. So about a month in we made the decision to knock Enterprise on the head for the time being and wait for the new integration to be deployed. Other problems at the time? I can't think of any massive ones.

There were some Platform problems that were revealed by the way you were using Platform. We had never had a customer who deployed all 130 sites at once, and we did find some limitations of our system that we had to work around. Likewise for backups, which is how we ended up working out the sweet spot of how many backups you could actually execute at the same time without killing the platform. We've got that now, and it's been running smoothly for, what, nine months at least. We got through a few bumpy bits, but hopefully we were responsive enough to call the project successful. But I think there were
definitely challenges, in that we were trying something that was new for both of us, and once in a while we ran up against engineering challenges. As a product company, our engineering team is not dedicated to this project, so you have to make a change request that goes into the engineering queue and comes back out the other side some time later. Because we were doing some things that were non-standard and non-conformant with the main product stream, that did lead to some delays in addressing some of Ixis's pain points along the way, and there were definitely moments of tension because of that; I'll be quite frank about that. But we did get through them all.

One of the examples was the routes.yaml configuration. We had some pretty complex regular-expression redirects being moved over from an Apache setup, and it took some engineering to get them to work the same way using Platform's system. I don't know how long it took, but we got it done, and as the British Council said, we got the project finished on time even with these delays and challenges for both of us. Yes, sir?
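For context, the kind of Apache-to-routes.yaml translation described above looks roughly like this in platform.sh's routing file. The shape follows the documented routes.yaml format (including `regexp: true` for pattern-based paths); the paths themselves are hypothetical examples, not the British Council's actual redirects, and at the time of the project this capability needed the engineering work just mentioned.

```yaml
# .routes.yaml (sketch; paths are hypothetical)
"https://{default}/":
  type: upstream
  upstream: "app:http"
  redirects:
    paths:
      # Plain redirect, like an Apache Redirect directive
      "/old-page":
        to: "https://{default}/new-page"
      # Regular-expression redirect, like an Apache RewriteRule
      "^/blog/(.*)$":
        regexp: true
        to: "https://{default}/news/$1"
```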
Oh sorry, go ahead. You mentioned that you're doing regular backups; how much data is involved, and are they incremental? So, from the side where Ixis have set up Rundeck, we execute a job every night which basically just calls Platform to do a snapshot. How that works underneath, I don't know; do you want to fill in the gaps?

What a snapshot captures is everything that's persistent on disk, so you get a consistent version of the site, including the database, the uploaded files, and Solr, all as one unit that can be restored. The underlying technologies do incremental, diff-based snapshotting, so the storage side of that is very efficient, but that's more of a cost-of-goods question for us than something customer-facing.

Let's jump back here, and I'll just repeat the question. The question, if I understood it correctly, was: could we do a before-and-after comparison of the resources needed to run these sites, and of the availability? Let's talk about availability first. I think it's not a fair comparison, in that you wouldn't be comparing a multi-site to a container approach; you'd be comparing a very problematic previous hosting experience, which had systemic failures causing downtime related to the site itself, plus a deployment process that took the site offline for four hours. It's a little hard to compare because both of those elements of downtime have been removed, so it's a much more stable situation altogether and the deployments are a lot shorter. In terms of overall server outlay, I'd expect we use more server resources than the previous setup did, but that's because we offer a lot more functionality: we take on not only the production running of the sites but also an extremely flexible development workflow, where you can create a new copy of the site on the fly, with all the services and all the data running, start developing on it, and then in an
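A nightly job like the Rundeck one described above boils down to looping over projects and calling the CLI's snapshot command. The sketch below prints the commands as a dry run rather than executing them; the project IDs are hypothetical, and the subcommand name follows the platform.sh CLI of that era (it was later renamed `backup:create`).

```shell
# Dry-run sketch of a nightly snapshot job (Rundeck calls the platform CLI).
# Project IDs are hypothetical; commands are printed, not executed.
snapshot_all() {
  for project in "$@"; do
    echo "platform snapshot:create --project $project --environment master --no-wait"
  done
}

# Example: queue snapshots for two country sites
snapshot_all bc_fr bc_jp
```

Each snapshot is a consistent unit (database, uploaded files, Solr index together), which is what makes a whole-site restore possible from a single job.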
organized way merge that into your workflow. That's a completely different product, so that overhead alone introduces more server resource use. It becomes a little hard to compare exactly what the difference is, because the scope changed significantly.

One question regarding Platform: how do you compare yourselves and your services against Docker, at least regarding performance and capability? Well, that's a really hard question, because you have to pick it apart a little to know what you're actually trying to compare. Yes, we use a containerization technology underneath, but our customers don't see it. Maybe the most interesting point of comparison would be: what would you have to do to achieve what we do if you started with Docker right now? You'd have to do an enormous amount. Even if you used Docker Cloud, you'd have so many responsibilities in terms of structuring your application, picking your Docker images and maintaining them, owning the security model, and all of that, whereas with Platform we hide all of it. The level of abstraction you think at when you're using Docker is a container: you look upwards towards the application, you look downwards towards the infrastructure, and you've got this middle-of-the-stack view, which is a very low-level view. When you work with Platform, you have the point of view of a developer who thinks about an application, and everything below you is out of sight. Hopefully, all you have to do on Platform is say "I want MySQL, I want Redis, I want Solr, and I want these mount points" and some other details, and then you put your code in and it runs. The fact that we use containerization is actually invisible to you. And in terms of performance and scalability, when we need to scale really big we currently don't even use containers to do it; we use full instances, and we can run sites from six CPUs for one site to hundreds of CPUs for one site, and we can go up and
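The "I want MySQL, I want Redis, I want Solr, and these mount points" declaration maps onto two small YAML files in the repository. The file names and key structure follow platform.sh's documented format of the time; the service names and versions below are illustrative, not the British Council's actual configuration.

```yaml
# .services.yaml (sketch; names and versions illustrative)
mysqldb:
  type: mysql:5.5
  disk: 2048
rediscache:
  type: redis:2.8
solrsearch:
  type: solr:4.10
  disk: 1024
```

```yaml
# .platform.app.yaml (excerpt, sketch): wire the services into the app
relationships:
  database: "mysqldb:mysql"
  redis: "rediscache:redis"
  solr: "solrsearch:solr"
mounts:
  "/sites/default/files": "shared:files/files"
```

Declaring this is the entire job of the developer; provisioning, wiring, and securing the containers behind those names is handled by the platform.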
down between those two levels without taking the site offline, and we do that without Docker or containers. So these implementation details are not always the most important thing to focus on. We do use Docker in a couple of places in our product: we use it for our own CI process, and we also use it to simulate various deployment strategies for our enterprise sites, so that we can mimic the deployment process a standard site has and then deploy that onto an enterprise site. So we use Docker, but we don't depend on Docker or containers for the very high levels of scalability, and as a customer you don't see the containerization level anyway.

Do you have any support for Java applications? Not yet, but it's clearly on our list, as is .NET; now that Microsoft is making it possible to run SQL Server on Linux, that looks very attractive. Maybe that's something for 2017; it hasn't been completely decided when that becomes the highest priority, but it's not available yet.

Okay. I understand there was a multi-site, so a single Drupal installation, and you had to configure and customize each country, if I'm correct. So now, on Platform, do you have a base code where you add functionality or customization or remove elements? So the base code lives inside a Drupal installation profile, which is held on GitHub. They make all the new modules and changes there, and then the make files that Robert talked about in each of the country sites pull the installation profile code base into each of the countries. Each country repository then holds any custom bits, like the custom theme, colour changes, or any imagery, stored there separately. Thank you very much.

You mentioned the real time for the regular build. For the cloning of an environment, say from production to get another copy to staging or something like that, does that take the same amount of time, or is the process different from the cloning?
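The base-code arrangement just described can be sketched in Drush make format, which is the usual mechanism behind Drupal "make files": each country site's make file pulls in the shared installation profile, and country-specific themes and assets live in that country's own repository. The project name, branch, and repository URL below are hypothetical stand-ins.

```ini
; country-site.make (sketch; project name and URL are hypothetical)
core = 7.x
api = 2

; Shared base: the installation profile held on GitHub
projects[bc_profile][type] = profile
projects[bc_profile][download][type] = git
projects[bc_profile][download][url] = https://github.com/example/bc_profile.git
projects[bc_profile][download][branch] = master
```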
What was the second question? Whether syncing the production environment to copy it to a new stage is different from the cloning, and what is being implemented. So we basically have two tools for people. One: you can make a new clone, which is actually a git branch. The fundamental idea of Platform is that if you make a new git branch, you get an entire site, and you get it really fast, and it synchronizes the data from a parent branch, master for example. So that's one piece of functionality you have available: the creation of an environment with a new copy of the data. Depending on a number of factors, in practice that takes from 90 seconds to three or four minutes. It doesn't depend much on how much data there is, because of the file system technologies we use underneath; it depends much more on how busy the server is, really. That's one of our actual killer features. It means that if I need to roll out a hot fix, I push a button, and three minutes later I've got a new version of the site to test on; I deploy my hot fix, and three minutes later everything is deployed. I don't have to disrupt anybody else's workflow, and I know I've tested on a bit-by-bit identical copy of production.

The other thing we do is a pure data synchronization, which is the same thing, only without a new code deployment. A data synchronization starts by throwing away the data on the current environment, then does a new, very fast copy of the data from the parent environment into, say, dev or stage, and then you have that data available. Because of the file system technologies we use, even if it's hundreds of gigabytes, that's going to take a couple of minutes. If you compare that to a MySQL dump, a MySQL import, a reindex of your Solr, and an rsync of your uploaded files, you've got a big smile on your face and you've just saved a lot of time.

Yeah, so one thing that Platform does
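The hot-fix flow described above maps onto a handful of git and CLI commands. The command names follow the platform.sh CLI (`branch` and `merge` are its environment shortcuts); the branch name and commit message are hypothetical, and the sketch prints the sequence as a dry run rather than executing it.

```shell
# Dry-run sketch of the hot-fix workflow described above.
# CLI command names follow the platform.sh CLI; branch name is hypothetical.
hotfix_flow() {
  cat <<'EOF'
platform branch hotfix            # new environment: full site clone with parent data
git commit -am "Apply hot fix"    # fix the bug locally
git push                          # deploys to the hotfix environment for testing
platform merge hotfix             # merge back into master and redeploy
platform environment:delete hotfix
EOF
}
hotfix_flow
```

The point of the flow is that the test environment is a byte-for-byte copy of production, created and destroyed in minutes without disturbing anyone else's branches.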
is that we implemented our own git. You use git as normal, but the git you're pushing to is one we built, and one of the things it does that other git implementations don't is track a hierarchy between the branches, which you see visually in the Platform UI. By default, new branches on Platform take their data from a parent, because there's a hierarchy that you can see, and if you don't specify a hierarchy, if you just push a git branch to Platform, it will have the master branch as its parent and will get its data from there. Good. Any other questions? We've probably got time for one more.

So Alexi mentioned that deployment downtime had gone from four hours to an hour and a half, and Mike, you mentioned you've got it down to 35 minutes. Those sound like scarily high numbers for downtime. Is that actually just the deployment time? Because I use Platform, I know downtime is practically non-existent. It was 126 sites, so each site was down for a minute while it was rebuilt, and it was 35 minutes for the process to work its way through the whole series of projects. No, no, thank you.

Right, I think we've run out of time, so thanks everybody for coming. We've got a case study, which the British Council approved today, so that'll be on our site in a few days. There are several Ixis people here, about ten of us, and we've got a stand as you walk into the exhibition area. We're back to back with Acquia; normally we're toe to toe, but we're back to back downstairs. So come on through.