Ready to go, may I have a thumbs up? OK, everyone hear me? Morning, everyone. Thanks for making it early on a Thursday morning. I'm sure there's been a lot of partying all week, so you've survived this far. Well done. I'm Steve Lowe. I'm the director of technology at Paddy Power Betfair, and Rich here with me is going to talk about some more technical stuff in a little while. In today's session we're going to talk about our journey to OpenStack, from a slightly different angle to start with and then a little bit about the technology at the end. So to start with, we're going to talk about the journey of Betfair and the culture, and how we've reshaped our technology team to take advantage of cloud technology. And then we'll get a bit into what we did on the OpenStack journey as well. But first of all, Rich is going to tell you a little bit about us.

Morning, everyone. I thought I'd put up a slide talking a bit about who Betfair is, but it's now Paddy Power Betfair. We're about 17 days into the merger, so apologies if we refer to Paddy Power or Betfair separately; Paddy Power Betfair is the new company. I put this up because a lot of people don't know who we are or what we do, so this puts the rest of the talk and the project itself into context. The Betfair part of Paddy Power Betfair is around about 15 years old and has offices and technical centers around Europe, but also has some interests in the USA and in Australia. We're a very engineering-led company, with around about 800 of the 1,500 Betfair staff being actual software developers and engineers. As a result, engineering and technology are very high up on our agenda. We've got an engineering blog; please go and have a look at betsandbits.com. There's a load of ramblings about various things that we find interesting. But it's the numbers here across our products that make us a little bit special and a little bit interesting when we start to talk about some of the scale issues that we've had. So we have an exchange product that's quite unique. It's a way that allows people to bet and to lay against the outcome of sporting events. It's very, very similar to a financial exchange, but instead of financial instruments, of course, it's sporting events. So one of you might think that a football team will win, the other person might think they will lose. Our technology brings the two of you together, and then when that transaction is done, it gives the money, the winnings, to one person, and we take a small commission off the top. We've also got a more traditional sportsbook, where the business itself takes the risk and you can gamble with us. And then we've also got some gaming products that allow you to play some amusing games in between those times. That exchange model drives some scale challenges, some of which I've listed here. So we have millions of active users, but the transactional volumes are what really set us apart. Around about 135 million daily transactions down at the DB level is pretty much where we've peaked with the infrastructure at the moment, and that's driven predominantly from that exchange. We're in that API Billionaires Club, alongside the likes of Twitter and eBay, that group, and we're at around about 3.5 billion API calls per day. And then we have some very interesting log and monitoring output as well. One of them is another open source project, OpenTSDB, where we have about 120,000 data points per second being consumed across the estate.
So all of these things together meant that this OpenStack project we've gone down the road with is not only quite exciting because it's OpenStack, but also we had to think about the challenge of the scale to go with it. So I hope that sets it into context. Steve's going to take the first part.

Awesome. So don't be afraid of this picture. I'm not going to tell you what DevOps is again; I'm sure you've heard it a thousand times. But I'm sure you've all seen a picture like this before. This was us about eight years ago. So we have our dev teams, who are all about change and continuous delivery. We were just starting our journey to microservices. And then obviously you have ops, who are all about stability, uptime, and reliability. So eight years ago, which was pretty much when I was employed by Betfair, we were growing massively. We were talking 20% month-on-month growth of the customer base and probably about the same for the development team, so we were hiring rapidly to try and keep up. We had a huge backlog of projects. We had a huge monolithic code base at that point in time, and we'd just started breaking some of that out into microservices. But our entire deployment estate was probably six or seven apps eight years ago. That's how it all started. So obviously that model wouldn't work for us; we needed to move faster. A couple of years later, we've got 100 microservices. So we've broken everything apart, we've got lots of small microservices, and our monthly deployment cycle of the past is now completely destroyed. Even if you did deploy monthly, we're deploying 100 things monthly, not three things monthly. Our poor ops team, as we threw all of that over the fence, were going, hold on, what's going on here? We can't cope with this. We can't cope with the amount of change. Trying to even do the knowledge transfer to that operations team was very, very difficult. So we built a DevOps team. Actually, if I'm completely honest, it was largely driven by the frustration of the development team to get stuff out there, and it was kind of a shadow IT organization. We got a bunch of guys, some from the ops background, some from the dev background, put them together, gave them a subset of access, and went through all of the regulation and audit challenge to give them that access, because we're in quite a heavily regulated industry. And we got them to be able to do deployments and a kind of subset of stuff. They couldn't really do network changes or big physical changes; that was still purely on the ops side of our world. But they could do deployments for us. So that freed us up a little bit, and now we're starting to deploy weekly rather than monthly, maybe 50-odd releases a week. It's starting to get more interesting. But there's still a bottleneck. We have one team of people who have that access; it's not everyone in the dev world. There are only so many of those guys; they don't scale. By now, we're up to probably about 400, 500 engineers on the development side, and these guys are now under a lot of pressure. So we thought, what we really need is to get those skills into all of the teams. Our dev teams need to be self-sufficient as much as possible. So we thought the best way to do that was, well, we've got these people with skills, let's stick them in the dev teams, like that. And then our dev teams, by osmosis, will automatically pick up those skills. Yeah, that didn't really work as well as we hoped, to be honest. We put them in there, and all of the dev teams went, ah, here's our DevOps guy.
Now he's sat with us, which means we can pile more work onto him. And the poor old DevOps guy is now kind of isolated from his peers a little bit, because he's constantly under pressure from his dev team. And actually, we started losing some of that standardization. We had quite a good Chef setup for automatic deployments, and that started diverging between different teams. We'd nearshored some of our dev teams, so they're in different countries, and trying to get two countries to talk to each other is really hard. So the guys over here are doing one thing, the guys over here are doing another, and there's no common team to pull that back together. So it was an experiment that lasted about a year, and then we decided actually we were better off in the original model. But we made a huge step instead. This was probably the biggest cultural change we made: we put our devs on call. Some of our devs had now picked up some of the skills, and we thought, right, we're now driving accountability as a business. We want people to be accountable for stuff. So our dev teams, who are building software and building applications, need to be accountable for those applications and the quality of those applications and the stability of those applications. The best way to drive that behavior? Call them up if it breaks. Now, some of our dev teams obviously didn't like that so much, because they're now being called at 2 AM when their software breaks. Such is life; they got over it. But it did drive the right behaviors. In the end, we now have dev teams who are on call. All of a sudden, all of your NFRs are almost perfect. Monitoring is now a serious consideration in the dev team, because they know they're going to get called and they want that information at hand very quickly, right? They want to be able to solve it fast and go back to sleep. So all those behaviors we wanted were driven right there. So our DevOps team became the backup. Our DevOps team got some extra sysadmin skills. Our dev teams don't have all of it, but they can run Chef, they can look at a log file, and they've got some basic skills in there. But if it goes too far and it's an operating system problem or something, you've still got these guys to call up and say, actually, it's beyond my knowledge, help me. That worked really well for a long, long time. But again, it covers almost everything, yet there's still some stuff we couldn't do. The networking is still out of their reach. We haven't trained our dev teams to be Cisco engineers. We haven't trained them in Citrix and all the different interfaces they'd need to know. So they can do a subset of stuff; updating applications is fine, but if I need to change network routing or anything like that, it's still beyond our dev teams. So what we need is something in the middle. We couldn't really train all of our dev teams on the huge, wide array of skills they'd need: firewalls, Citrix, Cisco, all of these things. That's too wide for anyone to get any kind of depth sensibly. So we need to automate. What we actually need to do is make the problem simpler for our devs. This is actually the real challenge. And to automate, it actually turns out that the guys who are doing the DevOps, and have spent time with dev and have got some systems knowledge, we actually moved them to the operations side of the business and said, well, actually, you could teach the operations guys a little bit about dev, right?
If you want to automate, you want these guys with all of that knowledge to build interfaces, to simplify it, to integrate that kind of stuff. So instead of just putting the operations knowledge into dev, we're putting the dev knowledge back into operations. The automation piece here is pretty much where the OpenStack part of our journey begins, and Rich will get into a little bit more detail shortly. This is kind of our current state. We have 800-plus developers; we have about 1,000 engineers if you include the ops side of the world in Betfair right now. This is pretty much where we are. I do have a vision of the future. I'm a director; this is my job, to have a vision of the future. Whether we actually get there or not is an entirely different question. But my vision of the future is actually that we forget about dev and ops completely. There is no dev and ops. There is just a set of teams with a mix of skills. So I might have a team, and I'm going to call them engineering teams to get away from my dev and operations separation. There's an engineering team that has ownership of something: you might have it for an application, you might have it for internet routing, you might own the edge network, you might have a piece about internal security, you might have a team with databases. In that team, there will be some experts, Cisco guys, there might be some Oracle guys in our database space, whatever you need. There will be some development skills in that team, because you're going to have to automate it. You're going to have to build some tools around your thing. You're going to have to integrate it with some other stuff. So all of that should exist, and we'll have one technology team, one single view of the whole world. This is my dream. I'm hoping we're getting there; we're edging towards this model. We've started with some fairly simple things. We've standardized the way work gets done. So we use kind of a lean, Scrum, Kanban mix at Betfair, and we use that across operations and development now. So you'll see the visualization of all the work, you'll see the tracking of the backlog. We've broken down ticketing quite a lot. There are some things tickets are useful for. If I want a new laptop, I raise a ticket, and eventually my laptop comes to me. If I want to make a network change because I'm trying to change the shape of my software, it's much better if I just go and talk to the guys with the network expertise, discuss the problem, and get them to help me solve it. Everyone has a common understanding. If you raise a ticket saying I want this application to be able to do this, and you throw it over the wall, the network team makes some assumptions, they do their change, and inevitably it's not what you want. So then you raise another ticket, and round and round you go. It slows you down massively. So that's what we're trying to do in our world. Those big cogs in the middle are largely driven by our OpenStack I2 project, which is our OpenStack deployment, which I'm going to hand over to Rich to talk about. Thank you.

So Steve gets the vision, and my teams have the exciting job of making it all happen. So let's start from the beginning, and this is going to be a bit more practical, around our OpenStack project, which internally is called I2, Infrastructure Second Generation. Not the most novel name, but it's served a purpose. So we'll start at the start. What were the issues that the business had? What were the needs that we were trying to solve with this OpenStack project?
Well, at the start, we needed more scale from our infrastructure. We were growing, and we were growing fast. And although the current estate was good for a period of time, it was getting near its end of life. It needed a refresh, and we needed to be able to scale it out. We wanted to provision much faster, so we not only wanted to take our software deployment and speed up the ability to get that through to production, but we wanted to speed up a lot of the interaction that we had with the infrastructure as well. And we needed to extend our continuous delivery work into that. I wanted to be able to take a change to the infrastructure and push that through the same pipeline of testing and assurance as I do with the code. And ultimately, we wanted to give those dev teams control so that they could make those changes and push them down the pipelines, so that they could alter the infrastructure that they're consuming. So we wrote down on the back of an envelope (we didn't really, it was much more serious than that) the requirements that we wanted from this OpenStack project. So, resilience across our DCs. We wanted to run active-active across our data centers; that gives us that 99.99% uptime that we're all heading for. We thought we wanted a software-defined network. And I say thought, because when we started this project a year ago, we really didn't know. We spent a lot of time with some analysts and with a couple of consulting firms asking that question: should we go down the software-defined networking route? And, usefully, 50% came back and said, yes, you definitely should; if you're doing a greenfield deployment like this, software-defined networking is the only thing you should do. And the other 50% came back and said, it's not been proven, no one's running it, don't go near it, absolutely stay away from it. So that was really helpful. So we decided that we would spend a bit of time looking at how we could bring this in, and we'll talk about some of the ways that we brought that through in a while. We wanted centralized storage, so that our devs could work out the storage that they needed and mount it in a way that was good for them. We wanted commodity compute; I didn't really want to go into anything that required some exotic, only-one-vendor type build. We needed to provide virtualization as well as bare metal. Some of the exchange components run at tens, if not hundreds, of thousands of transactions per second, and sometimes that extra 1% or 2% really does matter. But we wanted a way of putting in bare metal that used the same delivery tooling and the same system and process as we were using to spin up the virtual machines. Bare metal would also give us an option for containerization later, which was also interesting. We're a dev shop, so we needed everything to have a rich API. I wanted to be able to do and encode everything as code. I've started calling this phrase "dot dot dot as code": I wanted to be able to literally change everything through a set of APIs and a check-in of some code at the start. And of course, it had to scale. We couldn't just build it again; as much as I love the project, I really didn't want to do another one in three years' time. So we needed something that could grow as we grew with it. And I wanted to bake compliance in. I needed to make sure that what we put in place was fit for purpose in terms of the governance and the compliance that we had to adhere to.
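To make the "rich API" and "everything as code" requirement a little more concrete, here is a minimal, illustrative sketch of the kind of thing it enables, using the openstacksdk Python library against a generic OpenStack cloud. The cloud name, image, flavor and network names below are hypothetical placeholders rather than Betfair's actual configuration, and this is a generic OpenStack example, not their specific tooling; note that the same create_server call also covers Ironic-managed bare metal when the flavor maps to a bare-metal node.

```python
# Minimal "infrastructure as code" sketch using openstacksdk (pip install openstacksdk).
# Assumes a cloud named "i2" is defined in clouds.yaml and that the image, flavor
# and network names below exist; all of these are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="i2")

# Everything is reachable through the same authenticated API connection.
image = conn.image.find_image("rhel7-hardened-latest")     # hypothetical hardened image
flavor = conn.compute.find_flavor("exchange.4cpu.8gb")     # hypothetical team flavor
network = conn.network.find_network("team-exchange-net")   # hypothetical tenant network

# Provisioning a VM (or a bare-metal node, if the flavor maps to an Ironic
# resource class) is a single declarative call that can live in version control
# and be run from a CD pipeline rather than raised as a ticket.
server = conn.compute.create_server(
    name="pricing-service-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```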
But why not push it even further and start to encode some of those compliance checks and some of those security features, so that we're always at the latest, we're always at the safest, we've always got the most up-to-date patching? So this was my checklist of requirements. And then we had one very important choice to make: where did we go from here? We could go and talk to some enterprise vendors, take something off the shelf, plug it in and consume it. Or we could go to the community, find some of the best solutions there, take those on board and be a part of that community. Now, there are a couple of things about Betfair at the time that made this decision quite easy for us. We have had a long history of using open source software, both consuming it as well as contributing back to it. So as a company, we were very happy with the idea of not paying for the source code, if you like, but having it out there with the community. We also really like the idea of having a large community, tens of thousands or hundreds of thousands of people, helping us solve the same problems that the open source project addresses, and being able to contribute back to that as well. The idea of having a wide community using it in all sorts of ways, all with those same problems, is more attractive to us than having a business that will sell us something from a box with a limited set of resources and developers, and always that worry in the back of the mind about their roadmaps and P&Ls and all of this. So of course, we chose OpenStack, and that took us through to the birth of I2, this project. The project started in earnest probably around a year ago, when we went out to look at vendors. And in a while, I'm going to talk you through the various parts of the project. But let me get into some of the nuts and bolts and tell you some of the tooling and some of the vendors that we chose, and why. So it really is all about the tooling. KVM is at the bottom, you have an OpenStack layer, and then on top of that we have a whole load of orchestration and a whole load of delivery tooling that is just as important as everything below it. It's the ability to pull together this tooling and knit it together in a way that gives us this end-to-end automation that would help my team achieve Steve's vision, but also help us make sure that we put in place the tooling that the developers can use to check in not only their code, but also their intent for the infrastructure. So after some trials, and after some looking around based on knowledge we'd had previously, the following tool chain is what we pulled together. And this covers everything from the top-level orchestration and dashboarding, so you can see your code changes checking in and walking down pipelines, right the way through to where we put our artifacts and what we use for security hardening for some of the OS images. And this was important because, and I'm back to the "dot dot dot as code", we didn't just want to encode what we've got there on the right, one of our apps. We didn't just want to look at how we can bring up operating systems in an automated way, the CD, the CI type work. It's the stuff on the left here that was really of interest and really important. We wanted to bring that same model of CD to the infrastructure, and I wanted the devs to be able to check in to Git a series of code that would allow them to set up firewalls, storage, switching, routing, and even how they would consume the underlying compute.
Literally everything, checked in by a dev team, as code, that would then follow all the way down through a series of testing prior to going to production. So let's talk a little bit about our reference stack, and the guitarists will get the joke. We went out to market and we spent a lot of time trying to choose the right vendors, and I'm gonna talk a little bit about why we chose each of them. Yeah, let's go straight into it, why not? So we started off with the virtualization and the OpenStack layer. Now, we were keen to make sure that we had some support around this. Although we could have gone it alone, we really wanted someone who could come with the knowledge and the understanding of setting this up to start with, and we wanted to be able to use their consulting services at the start to come in and help us and train up our guys, so that as we went further down the project, our knowledge would increase and we would end up with a model where we could predominantly self-support ourselves. And that's very important, because in the horse racing and the football gaming side, things can move pretty quick, and sometimes we do not have more than a few minutes between a problem hitting and us having to fix it, ready for the next race or ready for the end of a soccer game. So we went out to market and we ended up choosing Red Hat as one of our main partners. One of the main reasons we chose them was the cultural fit between Red Hat and ourselves. They understood the open source world. They understood our requirement to be able to play and to develop and to use the software to the best. And they understood that we wanted to train up an internal team with their help, and they gave us that level of support. They went a bit further: we have a high-touch program, which means that we can help steer the roadmap of where some of this is going, and we can have some input into how they are developing the open source platform. And quite importantly, they'd already worked with an SDN provider and were able to give us a recommendation there, which we'll talk about next. And a very important one as well: they were very proud of saying we could walk away at any point, which is odd for a vendor. But the subscription model meant that literally we could walk away at any point. There was no vendor lock-in, which is one of the things we were trying to avoid from the enterprise world as well. So we were looking for a software-defined networking provider, and we had a long chat with Red Hat, and they put us in touch with Nuage. And Nuage became the third partner in what would be the I2 program. And I say partner because we really have, between the three of us, between Red Hat, Nuage, and Betfair, treated it as a partnership. Their guys come to our offices, they work alongside us, they talk to us, they chat. We go to events like this together. We really are in it together, and that's been very, very important in making the project work. Nuage brought some things with it that would help us. One, we were nervous about the SDN; they'd been proven at scale. They were big in the telco world, and they'd come from an Alcatel-Lucent background. They also brought with them a distributed app-to-app firewall model. So instead of having to hairpin out of the network to a large security device and back in again, we could distribute those firewalls between the apps on each of the hypervisors, and we could control those via policy, centrally.
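To illustrate what "firewall policy as code, controlled centrally" can look like from the dev side, here's a minimal sketch using plain Neutron security groups through openstacksdk. This is a generic OpenStack illustration, not Nuage's own policy API or Betfair's actual rules; a Neutron-integrated SDN such as Nuage can enforce the resulting policy in its distributed firewalls on each hypervisor. The group names, port and cloud name are hypothetical.

```python
# Sketch: expressing app-to-app firewall intent as code via Neutron security groups.
# Group names, port and the cloud name are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="i2")

# One group per application tier.
web_sg = conn.network.create_security_group(
    name="pricing-web", description="Pricing web tier")
db_sg = conn.network.create_security_group(
    name="pricing-db", description="Pricing database tier")

# Allow the web tier to reach the database tier on 5432, and nothing else inbound.
# Using remote_group_id keeps the rule app-to-app rather than tied to IP ranges,
# so it still holds when instances are rebuilt or rescheduled.
conn.network.create_security_group_rule(
    security_group_id=db_sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=5432,
    port_range_max=5432,
    remote_group_id=web_sg.id,
)
```

Checked into the same repository as the application, a change to a rule like this can flow through the same pipeline of testing and review before it reaches production.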
So this would make the network more performant and make it more efficient for us. We had some references from their other big customers, which took some of that risk away as well. And again, they invited us to a customer forum that allowed us some early insight into the roadmap and gave us an ability to feed back directly to that business. And again, they would help us train up as we spent time on the project. Routing was a little bit easier, with Citrix. As a client we'd used them before; we've used a lot of their physical devices. What was quite nice with this project is we took some of their virtual devices, and this allowed us to make all of our pre-production environments functionally identical to our production environments, with the advantage that those virtualized appliances in pre-production are a bit more movable and a bit more scalable for our testing. We also found that, as we were toying with the idea of maybe bursting a few things into AWS at some point, we could use that same Citrix management layer out in AWS as we could do natively. So it's very nice and flexible. For switching, we went with Arista, and we chose this for a few reasons. But what was great about Arista was that it was a great risk mitigant for us for this new SDN thing that we'd heard so much about. So Arista worked very well with Nuage and the SDN; that was plan A. But if we had had problems, if things hadn't worked, we could maybe fall back to Arista's AWS model and use that. And if that hadn't worked for us, we could just fall back to dumb switching. So it gave us layers of assurance that if the SDN was not what we expected, or needed a bit more time, we did have a backup plan. Pure Storage was an easy one. We chose these guys having actually used them before for some of our database work, but we chose them predominantly on the ease of management and also their performance. I love these graphs, because what it looks like is everything broke somewhere in the middle. It didn't. That's when we moved to Pure from the previous spinning-disk solution. These are all latency graphs, by the way. All of the latencies flatlined, and it wasn't a break; we just needed to recalibrate our graphs by about two orders of magnitude to actually then see the latencies. So performance-wise, they were pretty good. And then of course, x86 compute. Now, I was trying to stay commodity with the x86, but one of the reasons we went for HP is that, in terms of the Ironic project, they were putting a lot of time and effort into this. And for our bare metal solution, which we'll be working on soon, it was important to have that buy-in from the vendor as well. So let's talk very quickly about the project, where we are at the moment and where we've come from. The project was in four parts. The first one was a proof of concept, and this would have been around about nine months ago now. That proof of concept we time-boxed to four weeks. And having chosen the vendors, we used those four weeks to build up, in effect, a two-zone OpenStack stack, put in place all of the hardware from that reference stack, deploy onto it, and do some functional and some performance testing. Partly to test that the responses from the RFP were true, and partly to prove to ourselves it could be done and it looked like it would all work together. And it really was important that we got to the end of that and were happy with that testing, because in effect this was a milestone for us to unlock the next stage of the project.
Four weeks was a little bit aggressive on timing, but Betfair has a value called pace and we stuck by it. And in four weeks' time we'd actually managed to get there, and we had proven that this stack would work. So we went to the second phase. The second phase was about seven months ago now, and we set ourselves a target of six months to build what would become the seeds of the future infrastructure. So, starting from scratch, we were now deploying in two data centers. We were now deploying all of the control devices at the scale that they needed to grow, and we were starting to put in place all of the tooling and all of the systems, including the operational bits and pieces, to allow us to run this in operation. And the target was that at the end of this, we would be ready for production, fit for production. We had to do some things to integrate with legacy, because the next project, the migration project, would mean that we would slowly bring applications across. So we needed to communicate back to that legacy infrastructure in a way that would allow us to do that migration without interrupting the business. And we had an important decision around OSP7 as well. At the time, OSP6 was stable and OSP7 was brand new. And we had a decision: do we take this now, consume it now, or do we sit and wait with OSP6, which we'd proven the POC on, and then take OSP7 later? And we chose to go with OSP7 despite the risks. What we wanted to do was prove that we could update, that we could put in a new version of it. We wanted to understand what that meant, but also OSP7 came with Director, which would make the seven-to-eight upgrade a lot easier later on. So that brings us to about now. We're at the end of this pilot phase now. Last week, we put some of the first applications into production with it, which marked the start of the third project, the onboarding project. We've probably got 12 to 18 months ahead of us, where we're gonna take around about 200 applications from the old Betfair estate and move them, one at a time or several in parallel, into the new estate. But it's not just a lift and shift. Some of these guys, some of the development teams, will need to think about their architecture. They're going from an active-passive environment and we're putting them into active-active, and for some things, especially those concerning state, this brings its own architectural challenges. But we're also cleaning up a lot of the tooling and the systems that we're using. So as part of this migration, the teams will pick up the latest delivery tooling, the latest monitoring, and they will move to this way of working where they can self-serve for some of their infrastructure needs. But they've also got to decide: maybe they're virtualized now, or maybe they're on physical for performance needs, and they need to work out what they're going to use in the new structure. And we've tried to make that as easy as possible. So what we've done is, for every development team, we have ring-fenced a set of hypervisors that they will have, no one else, just them. That means we get rid of any noisy-neighbor problems, especially when you're performance testing. They know a test done now is gonna be exactly the same as when they go to use it for real, because there's no one alongside who may have been quiet earlier and is now loud while they're running in production. It allows them to make that choice between how much they put in pre-prod versus production, how much testing they want, and how many environments they need there.
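As an illustrative aside on how that per-team ring-fencing and flavor choice might be expressed in OpenStack terms: the usual mechanism is Nova host aggregates plus flavors, which is what the sketch below assumes. The team name, host names and flavor sizes are hypothetical placeholders, and the talk doesn't specify Betfair's exact implementation.

```python
# Sketch: ring-fencing hypervisors for one team and defining a flavor they can choose.
# Team, host and flavor names are illustrative; this assumes admin credentials.
import openstack

conn = openstack.connect(cloud="i2")

# Group the team's dedicated hypervisors into a host aggregate.
aggregate = conn.compute.create_aggregate(name="team-exchange")
for host in ("compute-101", "compute-102", "compute-103"):
    conn.compute.add_host_to_aggregate(aggregate, host)

# Define a flavor the team can pick (vCPUs, RAM in MB, disk in GB).
flavor = conn.compute.create_flavor(
    name="exchange.8cpu.16gb", vcpus=8, ram=16384, disk=80)

# Pinning flavors like this one to the aggregate is then done with matching
# aggregate metadata and flavor extra specs (the "aggregate_instance_extra_specs"
# scheduler filter), so the scheduler only places the team's instances on their
# own ring-fenced hosts; per-aggregate CPU allocation ratios cover the 1:1 vs 8:1
# contention choice mentioned above.
print(aggregate.name, flavor.name)
```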
It allows them to make that choice, if they're going to go virtualized, about how contended they wish to be, or what they're happy with. Maybe it's one to one, maybe it's eight to one; we'll give them that choice. And they also get to choose the flavor of the machine: how many vCPUs, how much RAM, how much storage, and where it is. They get to make all of those choices as well. And all of that work that they would have had to queue for before, to get networking changes made, to punch holes in firewalls, all of that is now in their control as well, for their apps. So we're hoping over the next 12 to 18 months we will bring all of the applications in, and then the whole estate will be on the new I2 platform, which leads us to the last project, the decommissioning. So we will then take the old estate and we will slowly break it down and throw it away, freeing up space in our data centers and reducing that footprint. So that's the project in a nutshell, and those are some of the reasons we've chosen the vendors we've chosen. I'm hoping it might generate some discussion. We have not only myself and Steve here; we also have Robin from Nuage and Carl from Red Hat, give us a wave down at the front, who've kindly agreed to come and answer questions as well. We wanted, as the customer, to give the presentation, but we're very much, like I said, partners, and they're very capable of answering questions around the whole project as well. So that's it. I hope you enjoyed it. Please come and talk to us if you'd like to know more. Thank you.

So in your setup, what kind of control and chargeback or showback or whatever do you have? Because if you give an app team access to whatever size they want or need, almost always they grab the bare metal with the 286 gigs of RAM and 75,000 cores and then they use 1% of it, and it's fine. How do you address that issue?

Yeah, we never saw that coming, did we? Do you want to take that? Yeah, sure, go for it. So it's a very good point. We do have a kind of showback model. We don't really show back costs, but we show back things like utilization: how much capacity you're using, how much is going to waste at any one time. We're putting together our top 10, if you like. So every month, the top 10 of most wasteful, least moved on to our new technology, lack of unit tests; there's like a five-page report of, here are the top 10 teams who haven't done unit testing this month. Surprisingly, they're very competitive, our dev teams. You start showing this information out there, and all of a sudden they're desperate to get out of the top 10. If anyone's trying to get to number one, we're gonna have to reverse that, but right now they're fairly happy competing to try and get off those lists. We are very much into giving control to our devs and empowering those guys. Like you say, there is a caveat. But generally, if you give people enough information, and nobody really wants to be the worst utilized in the entire company, they automatically deal with it, right? So that's pretty much our model at the moment. It's early stages. As I say, we only have one or two apps on here right now; we'll see how this pans out. We might have to be stricter. We certainly can take more control, but at the moment we're relying on our dev teams and the managers of those dev teams. Again, that accountability: we hold them accountable for their product, and that includes how much it costs to run it as well.

The shame-back model is pretty good, I like that. I might coin that phrase.
The whole business is quite data-driven as well. So one of the things we've done early on is we've taken some of those more performance-hungry applications, given them some of the new infrastructure to play with, and allowed them to go through those tests. And we've had teams before that wanted bare metal, or wanted exactly what you're asking about. But actually, they can see for themselves: actually, virtualized is running faster than we thought it would, it's putting out much more performance, we probably don't need bare metal anymore. So then, hang on, we can start thinking about the benefits of the portability of having a virtualized solution instead. Certainly on the network, making the network software-defined and putting a leaf-spine architecture in has massively improved the performance, and we're starting to see some of the output of that. But the devs are quite good. They'll look at it, they'll play with it, they'll test it, and if it's good enough, they're good; they don't get too greedy.

Since the rest of you aren't asking any questions... come on guys, come on. So, the structure of the teams that you have and how they're set up: do they all report to you guys, and is that how you were able to move them around? Or how do you deal with that HR issue of, I'm a developer, you can't make me into an ops guy, or vice versa, and that type of organizational cultural challenge? How did you address that, and how does that work in your organization?

So, as I say, we started eight years ago, so we've had plenty of time to work on this. We've changed our hiring policies, so we now make it very clear as we hire people: okay, we've hired you as a Java developer, but we're gonna expect you to know something about networking and some of these things. We don't necessarily need you to come with that knowledge; we're happy to train, and we put some processes in place. We have internal training programs going on all the time to get people the right skills. But over the eight years we have lost some people because they didn't want to do that, right? There are people who are, I am a Cisco engineer, I will never write a line of Python in my life, in which case we are not the place for you to work, and that's unfortunate. We also have a very strong grad program, so we've been busy recruiting and hiring and driving our own skill set; we're kind of training our own engineers in a lot of places as well. Over eight years we've now got grads who have gone on to be team leads and even more senior management, so the knowledge we've taught them from scratch is now built into their minds; they're now running teams, and that knowledge is then being pushed down and reinforced. It wasn't a quick process. I'm not gonna lie to you, day one it was quite hard. We did have some people going, that's not my job. Very early on, probably just before we started this journey, we merged QA and dev, right? We had a QA organization and a dev organization, and we broke that wall down as well. We merged those skills together, and we had the same thing. It's like, well, I'm a dev, I'm not a QA, I'm not testing my code. It's like, seriously, you've got to test your own code, man. That's what it is, or better still, test someone else's code. But yeah, it's been a long journey. If you wanna talk in a bit more detail, grab me afterwards and we can spend a little bit of time; I can go through the details of some of the programs we've put in place.

Which of the OpenStack distros were on the shortlist?
I think really the decisions were mainly around whether we went OpenStack or something else. I'm trying to think back to the vendors that we had. I'm also cognizant that I can probably only talk about some things and not others. If you wanna catch up afterwards, I think we've got one of our techies who can probably give you a bit more information about it. Sorry, the second part? We did have some RHEL in the environments before, yep. We have Windows in the estate also; it's not the main priority at the moment. With the merger between Paddy Power and Betfair, some of the Paddy Power estate is much more Windows, but predominantly this project was looking at the Linux-based side.

You mentioned a lot of technical reasons for choosing the components. Did cost play into your decision?

Cost always plays into the decision. For us, though, the main requirements were to try and provide something that was resilient, to keep the website up as much as we could, and something that would give us that speed and agility. And it was the benefits from that speed and agility to react that were probably one of the main drivers. I can give you an example: when we took this program to the board and presented the costs and the time scales, as I came out, the CEO tapped me on the shoulder and said, if I gave you an extra 20%, could you do it in half the time? I was like, I'd like to say yes, but actually we're already quite aggressive on the timelines; we couldn't do it. So I think speed was definitely a higher priority than the cost of the operation.

From a regulation and security perspective, you had a lot of challenges to go through. How did you address some of those, in that you're now dealing with more of a model where everybody does more, and that's generally frowned upon in most regulation discussions? How did you tackle and address most of those? You mentioned part of it, but is there more to it than just the split of what they could do, or how did you tackle that?

I guess there are a few ways to answer that. The first one is that one of the reasons we went private cloud and went for OpenStack is because the regulation meant we couldn't go to a public provider. So that was one of the main bits to satisfy some of that regulation. The other part is that the more you automate, and the more you put structure and auditing and trails behind this, actually the easier the compliance becomes. So yes, we may open up and give people the ability to do more, but that "more" is through the tooling and through the process that is there, and it produces very accurate logs that are complete about what they've done. It also allows us to automate some of the more proactive stuff, making sure we've got patching on OSes and things like this. We'll run pipelines that produce an OS image, so as a dev, when you take an OS, it will already be the latest and greatest that we have in terms of hardening. So those things together, that proactive work plus the auditing, give us a very solid way to keep the auditors happy, and we can show them everything we've done. Yeah, I think also in the early days we had a separation of concerns, that you can't deploy your own code, which was pretty much written into almost every audit and regulation requirement we had. The reason we formed that DevOps team, one of the reasons we formed a team rather than trying to give the devs access at the start, was that we needed to go through a process, right? Our operations team traditionally had a higher level of security clearance.
They had an extra set of restrictions and training around them as well. So we put that DevOps team through that training, so although they had a bit more contact with the devs, they still had the official sign-off. And then, actually, when you access a production box you go through a bastion, and there is security stuff on there that logs every keystroke. We had to put a load of automation and control around that, so we can demonstrate to the regulator: here's an audit of exactly what everyone did, here's the guy that did it, he's got his own login. All of that kind of stuff had to be put in place. It took us probably eight to 12 months with the regulators, talking around this stuff and going, this is what we're gonna do. Actually, in the end, they're much happier now, because they get it very clearly. In the old world, it was like, so we need an audit trail; well, here are all the tickets, and some guy over here, one of this team, did that ticket, and we're not really sure who. Now we have a very mechanical log, right? It's there all the time. You want to see what someone did yesterday? We can produce it like that. So actually, now they've seen what we've done, they're much happier with the fully automated kind of solution. Very strong audit in that space. It took a while to get there, though.

So you mentioned, I think in one of the early slides, seven geographically distributed centres. Has the pilot taken account of getting that OpenStack rollout out to all of those? Or how do you plan to design that?

Yeah, so the main project will be across two data centres, and the pilot itself built up both of those in lockstep. So right from the word go, from the end of that pilot, we'll have the capability in both DCs. Then there's getting it out to the dev teams that are distributed as well. When we built this plan, we built it for Betfair, so we had all of the Betfair locations in mind. Obviously we've merged quite recently, and some of this cultural journey we've been on, our new Paddy Power colleagues haven't been all the way through that yet. We're going to have to spend some time and bring them along. And it may well change; they may have had some learnings from that side of the business. We're not going to go, this is how Betfair did it, therefore this is the future. There's probably some good stuff from that side that we need to bring in as well. So there will be some flexibility in there. We are spending a lot of time travelling to Dublin at the moment and spending time with our Paddy Power colleagues, and we're going to spend some more over the next few weeks. But the actual core of what we've done on the automation, I think no one's arguing with it. In fact, if we talk to all of the guys in the new business, their ideas are completely aligned. How we implement them in nitty-gritty detail, I'm sure they'll have different ideas about; if you put 100 techies in a room, you'll have 100 views of how it should be done. That's always the way.

I'm just curious to know how you're making decisions about which applications to move over to the new environment. Do you have criteria based on criticality?

Yeah, and it's not a very straightforward thing to answer. So what we did was an initial sweep through the dev teams with some estimates of what we believed were the minimum prerequisites that we needed before that onboarding. And then their own estimates back to us allowed us to group them into various pots by size.
And we've got the group that should be very easy and straightforward, and we've got the group that we've got no idea how to do yet, but we'll have to figure it out when we get there. And between those, there's a broad spectrum. What we've been able to do is chunk that down into how long we think we need for each of those applications to migrate, and then fit that into their own development roadmaps. That will then extrapolate out to provide the plan over the next year, year and a half. We're doing an initial phase right now where we've taken 10 or so of the applications, broadly spread, and we're taking them as a first iteration. The learnings from that will feed back in again to give us a bit more certainty for the next ones. I think lining up that side is probably the most complex part of the entire program, actually. Yep, cool. Any more? No? We're around all day, and you really can't miss us in these tops, so if you do have any other questions, please do come and find us, and we're more than happy to tell you all about it. Thanks for your time.