Hi everyone, my name is Jacob Lieberman. I'm a Field Product Manager for OpenStack at Red Hat, and I want to welcome all of you to Austin. I actually live here in Austin, and it's very exciting to me to see everybody here. There's a long list of wonderful cities where these summits have been held: Vancouver, Tokyo, Paris. It's just amazing to me that Austin is one of them. So welcome, and thank you for coming. The first thing I want to mention is that I coach my kids' sports teams. In Texas it's all about football, but I actually coach rugby, for those of you from over the pond; I see the Betfair folks. I was coaching over the weekend and I lost my voice, so a good sense of humor will be essential for this presentation. If you work with OpenStack, I'm sure you have a good sense of humor already. So yeah, thank you; I'll be here all week. Let's get started. Luckily, I don't have to do the heavy lifting for this presentation. We're very lucky to be joined by my co-presenters, or, as I call them, the talent. First, we have John Quigley from Oak Ridge National Lab. For those of you who don't know, Oak Ridge is one of the premier Department of Energy research labs in the United States, and they do a lot of forward-looking work around scientific computing and technology. John is the technical lead for their cloud plus HPC (high-performance computing) integration team. I'm an old HPC nerd myself from my time at Dell and AMD, so I'm very excited to hear what John has to say. And then we also have Richard Haigh with Betfair. Betfair is the world's largest Internet betting exchange, and Richard is their global head of reliability and operations. You may have heard Richard present earlier today, correct? Or not yet? Okay, not yet, my apologies; that's coming later. So this is a preview of what you'll get later. My clicker is not working.
So I have to move back and forth. Here's the agenda, very quickly: I'm going to introduce the topic and the themes, and then move out of the way so we can hear from our presenters, whom we're so lucky to have. The name of this talk is "Going Beyond OpenStack," and I think that's kind of an odd title, because after all we're here for the OpenStack Summit, right? So ostensibly this talk begins where the summit ends: we're going beyond OpenStack. Understandably, it was perhaps not entirely clear to all of you, or even to our presenters when I approached them, what they should talk about. So I gave them two themes to help scaffold and guide their talks, and I'm going to share those with you as well, just to set the stage for their discussion. (Is that my phone?) Okay, I should move through this very quickly. First of all, there's an old saying: if all you have is a hammer, everything looks like a nail. Similarly, many of us OpenStack operators believe that everything can be solved by OpenStack and the software it includes. And when you're confronted with a nail, it's great to have a hammer.
However, sometimes you might need one of these, or one of those. That's a drill and a stethoscope. This is a message you hear a lot from Red Hat: we're a full-stack enterprise software vendor, so we have software to solve problems at the very top of the stack. I know this is a very marketing-y slide, and I promise it's the last one, but we have management software at the very top; we can help you with your PaaS, the infrastructure it runs on, the hypervisor, the operating system, all the way down to the cruftiest, oldest parts of the kernel where you might need help. You'll often find when you're working with OpenStack that you trip a problem or a bug that's not actually in OpenStack itself but at a lower level, and when you hit a problem like that, you need lower-level expertise to see you through. So the first theme I asked them to talk about was this: when you're holding a hammer, everything looks like a nail, but the hammer is not always the appropriate tool for the job. What business challenges have you solved with OpenStack, what challenges did you need other tools to solve, and what were those tools? That's the "going beyond OpenStack" part; that's theme number one. Incidentally, when people talk about OpenStack and compare it to a tool, the one I frequently hear mentioned is a Swiss Army knife: a package of discrete tools bundled together, each one well suited for a single purpose. I think a better analogy is this: a spork. It looks kind of funny from here, but a spork combines functions from different tools, and combining them in a new way enhances your ability to use both of them together. You don't need separate utensils to both spear and spoon your food; you have them both here. They multiply and complement the capabilities of one another.
They're not discrete tools like what we have here. So from now on, when people ask you what OpenStack is, say it's a spork; it'll make perfect sense to everyone. Thank you. Okay, good. The second theme I asked them to talk about goes along with a story you're going to hear a lot at OpenStack Summit, which is digital transformation. This is the first line from Anna Karenina, the famous Russian novel by Leo Tolstoy: "All happy families are alike; each unhappy family is unhappy in its own way." Now, what does this have to do with OpenStack? The second theme is this: wouldn't it be nice if every OpenStack deployment took place in a perfect green field, where there was no legacy software, no traditional boundaries between business units, no politics, no turf wars, no old SAN hardware that you have to figure out how to cram into your software-defined storage? There are so many challenges and problems you would never run into in a perfect world like that, where all of our applications are perfectly scalable and we can run them on a cloud without any modification. But in reality, it's more like this. OpenStack usually doesn't exist in a vacuum. Usually, as part of a digital transformation, people are moving to OpenStack from somewhere else, from the place where they already live and work, and it comes into an ecosystem of existing technologies, processes, assumptions, personalities, politics, and divisions. We see this very frequently. For example, let's say you're at a large financial institution and the storage folks have always operated the SAN. What happens to those storage folks when your storage is now just an application? Similarly with networking. Beyond that, you might have this guy. Does everyone know what this is? Office Space, of course. This movie was filmed in Austin, and you're all here in Austin, so I had to throw it in; I hope it's not some kind of copyright violation. You might have traditional political divisions within your organization, and maybe some people who are accustomed to doing things in a more traditional, conservative way, who just don't really get what cloud is about. You have to convince them; you have to find a way to work with them and win them over. And you might also have this guy. This is Bill Gates, of course. You may have to run Windows workloads, or workloads from another operating system or another vendor. Of course, these things work on OpenStack now, and we're getting very friendly with penguins these days, but just by way of example, a lot of the people moving to OpenStack may be coming from a completely different mindset: VMware, Windows, something like that. That all ties into the second theme: in a perfect world we're deploying into a green-field environment where all of our applications are ready, but in reality there are many challenges you might encounter, and traditional ways of doing things. So I also asked them to talk a bit about that; those are also things that go beyond OpenStack. I think we are right on time, so now please allow me to introduce John Quigley. All right, hey everybody, thanks for coming. Before we start, quick show of hands: who's running OpenStack right now? All right, not too bad; you'll get some of my jokes, then, I guess. (Am I doing something wrong? Didn't work. Okay, gotcha.) All right, so hey, I'm John Quigley.
I work at Oak Ridge National Lab. Jacob gave a pretty good introduction a second ago, but I'll share a couple of things about me real quick. I'm one of these guys who's worn different hats over his career: sysadmin, cybersecurity, developer. As a sysadmin you kind of have to wear a lot of those hats anyway, but now I find myself in a mix of tech lead, project manager, and sysadmin roles. About Oak Ridge National Lab: it's a really large science and technology laboratory run for the Department of Energy. We're located in the beautiful hills of East Tennessee. We've got about 4,400 employees spread over about 4,400 acres of land, but we don't each get an acre, unfortunately; that would be a nice homesteading benefit. Among the many areas of research we get into, these are just some, and you can see there's a bunch of different kinds of science there, from materials to neutron science to more integrated ones like climate science and national security. We also operate what we call DOE user facilities. These are world-class facilities that attract thousands of visitors throughout the year, so we have scientists coming from almost every country to come on site and use our computing and scientific resources. And of course we have Titan, which is the number one supercomputer for open science right now. There's a computer in China that's a little bigger; we were number one a couple of years ago.
I guess they're number one overall, but in some ways they're not as open as we are, so we get to say we're the number one open-science supercomputer. Go USA. Anyway, I mention all of that because I want to talk about OpenStack in the context of what we do at the lab. Sometimes it's hard to know what is a solved problem in OpenStack, what kinds of nails the OpenStack hammer can hit, and our journey has been to try to figure that out. OpenStack is a broad ecosystem; there are a lot of different projects, a lot of promise, a lot of things to get excited about. So our journey has been to figure out which of those pieces we can integrate into our traditional, or legacy, compute environment. Of course, at Oak Ridge we've been doing compute for a long time, so we have a lot of entrenched views on how to manage clusters and compute. When we first became aware of OpenStack, we got very excited and were thinking, "Single pane of glass! This is something we can use to deploy everything." There's truth in that; you really can. But it's not always possible, or maybe not always recommended. So as we go through the presentation, think about three zones of the OpenStack ecosystem. You can have services running inside of OpenStack; these are services that OpenStack spins up and presents to you. You have services running underneath OpenStack; these could be things that deploy OpenStack, for example. And then you have services running alongside OpenStack; these may be traditional IT services throughout your organization that OpenStack has to integrate with. The challenge, obviously, is to choose wisely: you usually have options for where you run something across those three zones. Okay, so now I want to talk about our customers and
their needs, and why we chose OpenStack. We've got thousands of scientists, and I break them down into a couple of categories: the internal ones, the ones we hire; the support staff around them, which is a bunch of IT staff, electrical engineers, chemical engineers, and mechanical engineers; and then all the visiting scientists I spoke about a few minutes ago. Here are some of the expectations of our customers, and then the challenges for how we deliver. Data movement: we've got to move large amounts of data around, from the point at which it's created to the point at which it's analyzed. Our researchers have this expectation that there's all this free compute and storage out there, because we're Oak Ridge. Right, we've got the supercomputer: "Give me some time on it. Why can't you guys just give me a hundred cores right now?" They expect a fast deployment turnaround, and that's because things like AWS have raised everybody's expectations. If companies like Netflix can go out there and spin up thousands of instances, why can't we do that too? Our researchers demand high scale; they usually need to run over hundreds of thousands of cores. They want high-throughput, high-capacity, low-latency I/O: fast and big everywhere. They expect utility IT, but they want it to also be innovative, and that's of course challenging. And they want analytics and workflow as a service. Okay, so those are the demands of our customers. As for the demands of our staff: prior to this OpenStack project,
we had a lot of different compute and storage silos, many different operating systems, and different management practices, so we really saw OpenStack as a way to bring some of those practices together under one team and start to deploy things a little more cohesively. Before OpenStack, and even now, a lot of the integration between OpenStack components and legacy IT is done manually. So we really desired a unifying platform; we thought, gosh, this is the one thing that'll rule them all. Okay, so let's take a look at where OpenStack fits in with some of these customer expectations. Data movement: that's something we don't do a lot of with OpenStack right now. It requires a lot of big iron and really, really fast networking, so we're not doing that with OpenStack yet. But the free compute and storage and the fast deployment turnaround, that's something we can do with OpenStack. I've got a little asterisk next to it to indicate there's a caveat: there are still maybe some other things you need, like a cloud management platform. If you remember from the AT&T keynote a few minutes back, they built a lot of tooling around OpenStack, an impressive amount actually, and some of that tooling takes care of the cloud management layer; that's what I'm talking about there. Coming down to utility IT and innovative IT, OpenStack can deliver on that too, and of course for anything as a service, OpenStack has options for you there. All right, our customer workload types. You see the word "platform" listed there a lot. We've got a lot of platform-type needs for data analysis; they want everything from websites to software repos to big systems that they can develop software on and run analytics on. And they also need a lot of big iron for
So let's see how open stack fits in with those workloads Now it doesn't doesn't hit all of them right now. So so far. We're not we're not doing so well on the one tool to rule and wall But for folks who want bare metal for development and test We can't do that with open stack just yet, but the ironic project promises a lot there We're looking forward to to deploy in that for platform for websites and software development data analysis Open stack does great for that It allows us to spin up large VMs or small VMs for folks to satisfy those needs But when you get down to the HPC side, we're not quite using we're not quite ready to use open stack for that yet But like I said a minute ago, that's what we're hoping that the ironic project can deliver for us Let me go ahead It's it's a little bit of that but just imagine you have hundreds or thousands of servers that you want to boot Immediately into into some kind of HPC cluster You don't yeah, you don't want to do virtual machines for that right now You want to you want to deploy that on bare metal the low latency part there is the is the kicker right so You know open stack is sometimes marketed towards organizations who want to do cloud native type deployments, right? Well, as we heard in the keynote, it's also an appropriate platform for sort of traditional IT as well, right? Anybody familiar with the the cloudcast podcast has anybody heard that one? It's a great one check it out if you haven't heard it But there's a recent episode where they were talking about this and this guy was saying that IT outside of Silicon Valley, you know a lot of organizations aren't doing cloud native applications, right? They just want if you think about the cattle versus pets analogy They want pets, but they just want faster pets or bigger pets, right? So that that's what we got at Oak Ridge. 
We got a lot of people wanting pets and here's here's a good example So we got data coming off of scientific instruments Lots of data it gets sucked into open stack through some kind of workflow virtual machine These virtual machines can have a lot of cores a lot of RAM and It performs some kind of business logic on that data and then ships it out to compute clusters, right? So right now open stack is really managing kind of the central piece of this ecosystem, but not the edges just yet May get there later All right, so going back to the IT needs. Let's see where open stack kind of fits in with some of those It's enabled us to kind of Consolidate, you know the different operating systems. We're deploying different management platforms. It's kind of helped us Develop some shared management practices and it's given us some freedom to do virtualization VMs and containers not so much on the bare metal side just yet all right, so to kind of wrap things up here We talk about, you know, you need like an integrated solution stack. So here's just here's just some of the things that You may have to Integrate your open stack deployment with that are sort of outside of open stack. These are services running outside So you might need some kind of cloud management platform So there's manage IQ or the the red hat supported version called cloud forms That's something that we're that we're investing in right now this allows you to Present a better portal to your users and then you can kind of develop some workflow that knows which organization your users coming from What kind of resources they're allowed to spin up and it allows you to maybe even charge back to users organization It allows you to also deploy on different platforms like VMware and AWS as well if you if hybrid cloud is something of interest to you Day center virtualization, you know, like I said, sometimes pets can be shoehorned into open stacks Sometimes it's not ideal. 
Sometimes you just want to deploy to VMware instead, and CloudForms gives you the ability to do that. Some other things: obviously file systems; a lot of our big file systems reside outside of OpenStack. We have hope that in the future we'll be able to deploy parallel file systems from within OpenStack, but we're just not quite there yet. IPAM and DNS are things we currently have to integrate manually ourselves, and of course identity, like Active Directory, we have to handle outside of OpenStack as well. All right, here's a quick view of the different kinds of systems we have in our private cloud plus HPC project. We've got some Cray hardware running graph analysis, and a lot of heterogeneous compute; we've got some compute racks that are the same as the ones we use in our Titan supercomputer. We also have our private cloud, about 4,000 cores, split among a couple of different versions of OpenStack, some dev, some in production. We have some institutional clusters as well that we manage outside of OpenStack, and you can see some of the technologies we use there in the right-hand column. Eventually we would like to expand OpenStack's reach throughout this stack, but right now we have other software that we have to rely on. All right, that's it from Oak Ridge; let me turn it over to Richard here. Hello. There are two names up here: Richard Haigh, that's me, and Steven Lowe. Give us a wave at the back, Steve. Steve's here as well, so if you've got some questions afterwards, we're both around; please come on across. You can't miss us in these shirts at all.
I'm going to take a few minutes and talk to you about Paddy Power Betfair's OpenStack journey. If I start calling it just Betfair, please excuse me; we're about day 70 into a merger, and I'm still trying to realign my brain to the new "we are Paddy Power Betfair," so I'll try and do my best. I'm going to spend a few minutes talking you through why we chose OpenStack, where we thought the value was coming from, the problems we were trying to solve, and why we chose Red Hat. Then I'm going to get a bit more practical and talk you through what we've done and the phases of the project we put in place to achieve it. Hopefully some questions afterwards. But first of all, a lot of you will never have heard of Paddy Power Betfair, so let me give you a quick tour and put some of the engineering we're doing into context. Betfair, the second part of Paddy Power Betfair, was born about 10 to 15 years ago, with offices across most places in Europe, but also in the USA. We've got an engineering blog, betsandbits.com; please go and have a look, there's some interesting stuff on there. We're a very engineering-centric company. We've got about 800 software developers in
We've got about 800 software developers in Company of around about 2,000 people, but what makes us really special is the products So we are an online gaming company and we offer a betting exchange, which is very much like a financial exchange It allows you to Use the outcome of a sporting event to either bet or hedge bet against the outcome And we match you with other people who have maybe differing views from yourself And then when the game is over we bring those those two things together and where the winner gets the winnings And we take a small cut off the top So that's the exchange model and with the exchange model comes some engineering challenge around scale to get it working So we have millions of users, but it's really the transaction parts our exchanges You can hit it via an API and with that comes high frequency trading Bots trading against you and as a result we have about 135 million transactions per day at the DB level to that exchange will generate Coming from around about three and a half billion API calls a day So we have a very a very for us kind of a large-scale engineering challenge And we were trying to put open stack into this there's some other things there some fairly big log output about 120,000 data points per second of monitoring that we do And we try and push out we're trying to get faster on our deployments And if our open stack project was trying to help that we're about 500 a week at the moment This is across all of our software. This is pushing out stuff into our infrastructure there I'd like to get the down to 500 a day and it seems like we'll we'll get there And then of course recently we merged with Paddy power to form a footsie We're actually about a footsie 50 I think but a footsie 100 company with a large market capital So it's all good right that's better Paddy power better Let's talk to you a bit about how our open stack project started. 
i2 is the internal codename for our OpenStack project. It's not wonderfully inventive: "infrastructure, second generation," i2. It's the best we could come up with, but we put all of our time into the engineering instead, so that was fine. So what were we trying to get? We were trying to get more scale from our infrastructure. What we had had done us pretty well so far, but it was getting a bit old and needed renewing, and we needed something that could pick up the pace and keep up as the scale keeps growing year on year. We wanted to provision faster; we're trying to get down to 500 releases a day, so we had to provision really fast. Specifically, and we'll talk a bit more about the infrastructure side, we wanted to go from it taking days, maybe weeks, to push a change up, maybe because you need some new hardware underneath, maybe you're trying to build a new cluster, maybe you're trying to launch a new service. We're trying to go from those days and weeks into minutes: a deployment, with all of its testing and everything, in the 10, 20, 30 minutes range. We wanted to extend our continuous delivery into the infrastructure layer, and this was quite important, because a lot of those delays in bringing a new service online come down to: I'd like some new servers,
I'd like some holes punched in a firewall, I'd like to set up some load balancing, I'd like to set up a new network range. All of that involved raising a ticket, and that ticket got large, and then someone in that engineering department picked up the ticket, and then they misunderstood what you asked, or maybe you asked it badly, and then they did something you didn't want, and then you had to go around again. That's what led to the days or weeks before you could start deploying some of this stuff. Now, we're pretty good at putting continuous delivery around our software. What if we could take that and push it back into the hardware side as well, and actually give our development community the ability to create their own network topology and their own storage mounts? That would make us go faster, and that's what we wanted. Ultimately, it's about giving the devs the control. We're a regulated industry, so we couldn't just spin everything up in Amazon or a public cloud, but our devs all loved AWS, or Azure, or whatever. We wanted to give them that same ease of use, but in an environment that we could satisfy the regulators with; hence we came to our OpenStack private cloud. So we looked at our requirements. Right at the top, we wanted resilient DCs: I need to be able to lose a DC and carry on serving my customers, and when the customers are hitting you thousands, tens of thousands, of times a second, I need to do that in a way that doesn't stop for one to five seconds, because people get very upset. Software-defined networking: we'd heard about this.
We thought it might be a good step, but we'd never used it before. We decided the only way we were going to get that speed of deployment and that ease of use for the developers was to go all in and put software-defined networking in as well, and we'll talk a bit about that in a while. Centralized storage: we wanted to be able to carve that up. Commodity compute: I wanted to be able to rack and stack servers anywhere and add them into the cloud. It's got to do virtual and bare metal: some of our boxes do tens of thousands of transactions per second, and sometimes that one or two or three percent the virtualization layer takes away counts. So we needed something we could add a bare-metal solution to in the future, and with Ironic we hope to do that in a few months' time. A rich API for automation, because our developers are developers, and if you don't give them an API they get very upset. They wanted to be able to use this, to code against it, and internally on the OpenStack project we're all developers too, so we also wanted this. We didn't want something that involved having to hit a portal or fudge some workaround; we needed rich APIs from everything we were putting in, so we can instruct it as code. And of course we wanted to scale for future growth, and we wanted to bake compliance in, so that the security model was baked in right the way through. So we had a choice. Here's our first choice: who do we choose?
We of course looked around for people who could come and help us achieve this, and there are pretty much two camps that we found. There are the enterprise software vendors, who will come and sell you a solution that will plug in, and they will help you set it up. And then there's the open-source community. The Betfair part of Paddy Power Betfair has a very long history of being involved with the open-source community; we publish some of our own stuff and we consume a lot, and we help with a lot of the tooling and monitoring from other open-source communities. So we're familiar with open source, we're happy with it, and we like the idea of having tens or hundreds of thousands of people in a similar predicament to us, end users, people who care, committing back to that central code base, and us being able to help as well. You just don't get that with the enterprise side. So it was a fairly clear choice for us: if we could find an open-source solution that worked, we would be interested in looking at it. And this led to the birth of our OpenStack project. We did a fair few rounds to try and find who would help us, and we settled on Red Hat, Nuage, and ourselves, a three-way partnership. Red Hat would provide us a KVM layer and an OpenStack layer, plus some skills and consultancy to help us get started. Nuage would provide us with our software-defined networking, and again a lot of skills and training, because this was new for us. Then the three of us together made a commitment to be true partners in this endeavor, to push forward in a way where we could hopefully commit some things back to the community, and to push the boundaries a little beyond what we may have been able to do previously on our own. So let's talk a little about how we've done some of that. Tooling, of course, is important: the hammer, the nail, the Swiss Army knife.
This is our Swiss Army knife. On top of our base OpenStack layer, we use the following tooling to give us that continuous delivery approach. But unlike what we'd previously done, we didn't just want to use the CD approach for the software. On the right is one of our apps, and that was fine; we were already doing CD for that, the operating systems and the pushing of that software down the CI/CD pipeline. But the stuff on the left here, the firewalls, the storage, the switching, the underlying networking, the actual x86 provisioning, we wanted to automate all of that. We wanted to coin the phrase "dot-dot-dot as code": anything in this project, I wanted to be able to define, roll out, test, deploy, and scale as code. And we hit every single one of the vendors we talked to, every single one of these problems, with that mindset: if we can't automate it, if we can't deploy it, if we can't give it to our developers to use that way, then we need to think again and make it work. And we did make it work. We came up with, for us, what is a reference stack:
A list of vendors that we put together to give us this project. Arista for switching, for many reasons, but mainly because they would work with the software-defined networking from Nuage if we had issues with it; remember, again, this was new for us. They have their own software-defined networking approach which we could use, and if that failed, we could just go back to dumb switching and try to automate that in other ways. So a clear "works with everyone" type approach. Citrix for routing, which we're familiar with already. They have a virtualized version as well as a hardware product, and I'm quite happy taking a virtualized solution over a physical solution as long as the functionality and so on is the same, so we had a good way of putting that in place that allowed us to directly align our production and our pre-production systems, so they are the same in as many ways as we could manage. Nuage for our software-defined networking and our app-to-app firewall. So instead of having to pin out of the network to a physical device on one side and then pin back in again, we can put mini firewalls, distribute them around the edge of every hypervisor, and define and manage that policy across all of those hypervisors. So now we have a massive network performance improvement, because we're not hopping in and out all the time, and in fact, empirically, we're seeing some massive performance improvements from doing this. Red Hat for OpenStack and KVM; we've talked about those a bit. And Pure for all-flash storage, because we wanted something fast.
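The distributed app-to-app firewall idea above can be sketched as a fan-out: one centrally defined policy is projected down to per-hypervisor rule sets, so enforcement happens at the edge of every hypervisor rather than hairpinning through one physical device. This is an illustrative toy, not Nuage's actual API; the tier names and placement data are invented.

```python
# One central policy: (source tier, destination tier, port). Hypothetical data.
policy = [("frontend", "api", 8080), ("api", "db", 5432)]

# Which application tiers run on which hypervisor (invented placement map).
placement = {
    "hv1": {"frontend"},
    "hv2": {"api"},
    "hv3": {"db"},
}

def rules_for(hypervisor: str) -> list:
    """A hypervisor only needs the rules touching workloads it actually hosts."""
    local = placement[hypervisor]
    return [r for r in policy if r[0] in local or r[1] in local]

for hv in sorted(placement):
    print(hv, rules_for(hv))
# hv2 hosts the api tier, so it receives both rules; hv1 and hv3 get one each.
```

The win the talk describes falls out of this shape: traffic between two tiers on the same hypervisor is filtered locally, with no round trip to a central firewall.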
We've done a load of performance testing on Pure. I've got some great graphs somewhere that show some latencies and then the old flat line; it looks like we switched it off, but it didn't flat-line, it's just that the performance improvement was so large we had to recalibrate all of our graphs. And we've chosen HP for x86 compute, mainly for their involvement with the Ironic project at the moment and the hope that we could get a bare-metal solution, which then leads on to a nice containerized solution as well; so, trying out a future plan a bit. Right, in the last few minutes, let me tell you how we ran the project. There were four phases. The first one was a proof of concept. Betfair has a value called pace, which means we like doing things fast; sorry, Paddy Power Betfair has a value called pace, which means we like doing things fast. So our PoC was going to be four weeks, and in those four weeks we were going to build a two-zone OpenStack stack itself, and although it was only in one DC, we were going to try to line up pretty much the same switching, SDN, storage and compute as we would finally want in production, and we were going to run a load of tests against it to see: does it work? Does what the guys said they were going to do in the RFP response actually work? Can we test it? And we did that, and after four weeks we were able to successfully functional and performance test that stack and sign off, ready for the next stage. And the next stage was the pilot, again quite quick.
We're at the end of that six months now. We just went into production last week with our first two applications, but this was six months' worth of work, planting the seeds of what will become our production estate. So we're now in both DCs, we're now building on the exact hardware that we're going to use, ready for the scale we're going to need. We've got all of the integration back with legacy, because during this migration we're going to have some apps left over there as we move other apps over here; they've got to be able to talk still, and we've got to be able to continue to service the needs of our customers. All of that delivery tooling slide, we had to take it out of our heads, get it into code, and actually make it work. We had to make use of those APIs in order to get the infrastructure to be dot-dot-dot as code. All of our monitoring services had to reach in. And we had to make a decision early on: did we go for a very, very early release, OSP 7, or did we stay on the much safer and more predictable 6? Being us, we went for 7, because we thought we might as well crack on. We want to take all of the releases that come out of Red Hat OSP, as well as some of the Nuage stuff, as fast as they can push them; we want to be able to consume them, and part of that, for us, is using things like OSP director and our automation to take these updates and push them back in, so there was no point in us chickening out at the start of the pre-production phase by not doing this. So we went for 7, and it worked. So we're at phase 3, and this, if you were listening to this morning's keynote from Mirantis, this is the 90% of the problem. The 10% is the technology: we've now got something that looks good, and now we've got to convince the people and get them to move all their applications across. For us, that's around 200 applications that need to move, each with their own requirements, their own needs, their own teams who either believe in us or don't. We have to bring
them all across, so we're now in a massive drive, it's kind of the hearts-and-minds piece, to change the culture at Paddy Power Betfair: to give the teams this tooling, allow them to use it, get them to understand it, and then take advantage of it so they can speed up. And some of this was quite difficult. Some of these applications had not been designed for active-active across different DCs; they've always been well connected. So for some of these there's a bit of a technical challenge to onboard as well, which means we really have to sell the benefits, otherwise the teams won't put in the effort to make that change. We have to get them to choose between the virtual and the physical; although we are promising bare metal, we're not there yet, and previously their experience with virtualization may not have been great. So we're having to show them that actually, on this new stack, on this new hardware, with the reduced latency from the networking specifically, the all-flash storage and so on, it is actually very performant. And in fact what we're seeing, particularly from about the last couple of months of testing, is that it is a step up: we've gone from proper old-school bare-metal machines in our previous estate to virtualized, running uncontended but virtualized, on the new one, and they are faster: 30% faster, 50% faster, you know, significantly faster. And we've gone for a self-selection approach as well. If you're a developer using AWS, you can decide the flavor of your machine, so why shouldn't you be able to in our private stack? You can decide your tenancy, you can decide whether you are going to put yourself across multiple resiliency zones, etc. So why shouldn't we do the same in our own OpenStack?
So we've done just that. Every development team gets ring-fenced hardware; we get rid of the noisy-neighbor problems immediately. They get to decide exactly their split between pre-production and production in terms of what they want to put where. They get to decide exactly the contention: whether they're going to use virtualization or whether they want bare metal. They get to decide exactly, if they're using virtualization, the flavors of those machines: how many vCPUs, how much RAM, how much storage, where it's mounted. Give them the choice. They're the responsible adults we're employing, and we will give them the tools to make the decisions that suit their applications. So we're hoping all of this is going to lay out that doormat and allow them to come in, welcome, and come on board, and we've probably got, I guess, another 12 months' worth of work before we've moved all of those guys across. And then the fourth part of the project is decommissioning: taking the old fighter plane that was awesome and, unfortunately, switching it off and throwing it away. We have to do this because as the new technology comes in, we need space in our data centers, we need power and cooling and all that kind of stuff. But it's a project in itself to make sure that we clean up; we don't want to end up with another data center crammed alongside the old data center and both of them still running. So that was it. That's the project. That's why we did it. There are a couple of faces you might see wandering around: Steve, as I said, is at the back, I'm here, Robin's at the back from Nuage as well, and a guy called Carl,
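The self-selection model described above can be sketched as a small quota check. This is a toy illustration with invented names and numbers, not the actual Paddy Power Betfair implementation: each team picks flavors within its own ring-fenced capacity, so one team's choices can never become another team's noisy neighbor.

```python
# Hypothetical flavor catalog, loosely modeled on OpenStack Nova flavors.
FLAVORS = {
    "small": {"vcpus": 2, "ram_gb": 4},
    "large": {"vcpus": 8, "ram_gb": 32},
}

def can_place(team_quota: dict, used: dict, flavor: str) -> bool:
    """Check a requested flavor against the team's remaining ring-fenced capacity."""
    f = FLAVORS[flavor]
    return (used["vcpus"] + f["vcpus"] <= team_quota["vcpus"]
            and used["ram_gb"] + f["ram_gb"] <= team_quota["ram_gb"])

# One team's ring-fenced allocation and its current consumption (invented).
quota = {"vcpus": 16, "ram_gb": 64}
used = {"vcpus": 10, "ram_gb": 40}

print(can_place(quota, used, "small"))  # True: 12 vCPUs / 44 GB fits
print(can_place(quota, used, "large"))  # False: 18 vCPUs exceeds the 16 ring-fenced
```

The design point is that the quota, not a central operations team, is the only gatekeeper: within their fence, teams choose tenancy, contention and flavor themselves.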
I think is not here, from Red Hat. But you really can't miss us: if you've got any questions, either now or later in the day, please come and talk to us; we're more than happy to talk about what we've done. We've also got a very, very technical session tomorrow at midday with a guy called Steven Armstrong, who has led pretty much all of the technical side of the project, and then on Thursday morning at 9, Steven and myself are going to be elaborating a bit more on this culture change that we've undergone at Paddy Power Betfair and how that led into and tied into this IT project. Thank you very much.

Okay, everyone. Well, I think we're right up on lunch, so just as a concluding remark: thank you very much to Richard and John. You know, it's always amazing to me that we can take this same platform and apply it to application areas as different as an online betting exchange and scientific computing. Thank you. Thank you, everyone. Go have lunch.