We're good? OK. I'm Graham Peacock from TD Bank, so I won't bore you with the introduction again; I think you saw that in the keynote. You know the background of the bank, and the explosive growth over the last couple of years is compelling, but we understand the challenges that it imposes upon any infrastructure team, or any technology team. But really, what we're doing is looking to impose a generational shift on the way that the bank builds, deploys and supports technology, and that's a huge undertaking. So when you're building a rocket, the first thing you need is a monkey to test it on. So, looking at the evolution of where we are: across the bottom of the page, I want to just list some of the features or capabilities that you would expect as you evolve through that journey. And I think you start in a very, very manual way at the far left-hand side; your designs are entirely custom, bespoke. And as you get towards the right-hand side, you expect a degree of standardisation, commoditisation and automation. And it would be nice to go through a number of well-defined steps to get from the far left-hand side to the right-hand side. We didn't do that. We were somewhere within that red box. We didn't really distinguish between how we built physical machines or virtual machines; in fact, the process was identical. The release process for applications was identical and highly manual. And effectively, what we chose to do is jump to the far right.
So not necessarily the way you would want to embark on this journey, but nevertheless the way that we had to move forward. So the first thing that occurs to you is OMG. You start thinking about the solution that you think you need and the number of players in the market and the feeding frenzy that is happening across most vendors, particularly whenever they mention the word cloud. It generates a huge amount of confusion and it's a pretty daunting task. Yes, we understand the problem that we have today. We realise that we need to move forward, but maybe a better title for this would have been WTF. So in terms of direction, what do we think were the important criteria as we embarked on this decision? So, clearly we expect to be heading towards public cloud. We just don't know when yet. I think the banking industry, the regulators need to become comfortable with that. We need to understand the risks that we're taking on. We need to plan for that journey. We need to get there in baby steps as far as possible. We also know that we've got applications that we've accumulated over many years. The applications are pretty chatty. They're pretty latency sensitive. Jumping to public cloud would be disastrous for many reasons. So how do we get there? We also have a lot of congestion in our data centres. Shifting data centre topology is actually very difficult. Clearly we wanted to head towards public cloud by building a private cloud initially. But if you build that off premises and force our development teams to work off premises within the private cloud, then I think it eases the journey towards public. So that was one of the decisions we made fairly early on. We also wanted a fully managed infrastructure service. We see building hardware as really a commodity. There's other companies out there that can do that far better than we can. They can support them far better than we can. And that part of the cloud offering is very mature I think at this point. 
Platform as a service, far less so. So we chose to focus our internal resources on the platform-as-a-service problem and work through that solution. That was a pretty daunting task, but clearly we wanted to partner with somebody to manage both the off-premises and on-premises pieces of private cloud, and infrastructure as a service, for us. Looking further out, we expected Docker to play a major part in our future. Clearly we need to get from the cloud offering that we intended to stand up on day one through to containerisation, and manage the impacts on the development teams across the organisation. And bear in mind what I said on Monday: there's 10,000 people in the technology team at TD. Now, beyond provisioning platform as a service, clearly we expect to move up the food chain. There's an awful lot of things that applications are doing repeatedly across the enterprise that could be, and should be, better done by the infrastructure team. So beyond containerisation, clearly we expect to be in a position to provide APIs that deliver infrastructure services to those development teams and help consolidate and centralise some of that functionality. So that's the direction that we laid out for ourselves. Nothing too earth-shattering there, but we were very, very new to this. The second question is: who do I trust? One of the problems that we see in the industry is that we like open source, but there's so many applications, so many components, and it feels like every time that somebody in the community runs into a problem, the solution is always to throw away that particular component and build a new one. So it's a very, very confusing space to work in. And I think it's also way too big for any one person or any one company to try to, or be able to, solve all of the problems that we're going to encounter. So partnership is crucial. So we started looking for people that can actually add value and help us navigate through this problem.
So we were looking for a certain type of behaviour, certain criteria, from the people that we wanted to partner with. Communication is crucial. As I said, it's a very, very complex space: a lot of different maturing technologies, integration kind of works, sometimes, and we had to navigate this in a very, very short amount of time. So moving at pace is absolutely key. We made the decision to move forward with a cloud at TD in about July last year. We were then given until December to stand it up for the first time. So, in typical banking fashion, you're given a date and then work backwards from that date to make it work. Very, very difficult. Now, clearly we like people that are focused and deeply technical. It might sound obvious, but it's very, very hard to find a lot of these people within the industry. Simplicity, I think, is key, given the complexity that I've talked about and the immaturity that I've talked about. Being able to communicate the solution simply across a very, very broad organisation is absolutely crucial. I think experience is everything. The number of people that are capable of working through very, very complex problems, coming up with very simple solutions, and communicating them again and again and again across an organisation of 10,000 people: very, very hard to find. So even finding people that have solved some of these problems before was absolutely essential to running this kind of project in the amount of time that we had. I think partnership is bidirectional. Everybody talks about it during their sales process; generally, when you put your signature on the contract, partnership evaporates pretty quickly. So we were pretty interested in finding people that truly believed in partnership the way that we did, and actually really wanted to partner around product strategy. We're a bank; we're figuring this out as we go.
Clearly there'll be things that we need built into multiple products, and where they make sense, we want to be able to contribute to the strategy across the community. I think openness is very, very important. There's an awful lot of proprietary solutions out there; historically we've been a buyer, we are after all a bank, but we're very concerned about being locked in to any particular solution that we chose. I think that was one of the most compelling things about the open source community: there's a lot of plug and play, and if one particular component doesn't work out, we can switch it out. So those were the qualities that we were looking for. Moving forward, we selected seven vendors to provide the cloud solution with us. These are the most important. Starting from the bottom, I think it's crucial that you identify like-minded individuals: smart, deeply technical, very, very motivated. A company like Risk Focus allowed us to shorten what would probably have been a two-year time frame into something that we stood up, a development cloud, in a few months. By the time we have the production cloud stood up in July this year, it will have been a year. By choosing the right people, and focusing on people that are narrower but deeply technical, that allowed us to shorten what would probably be a two-year project and deliver in one. The middle section: I think Rackspace needs little introduction. Clearly, once you decide you don't want to go down a proprietary route for cloud, and you want to stay with an open source solution, then you're probably going to partner with the folks that created OpenStack. The logo I actually like; of the Rackspace logos out there, I think "die trying" tells an awful lot about what we had to achieve, the partnerships that we built and the approach that they took. So there's a lot of people here, and I think the philosophy is very much: let's do it once, let's get it right, or we are going to die trying.
Cloudify at the top. There's a lot of orchestration software out there, and we were pretty long on TOSCA. I think we like the abilities that it gives us to build standard, reusable design components in a relatively straightforward language, and to actually educate the development community to take the platform blueprints that we build and extend them, taking them into the infrastructure software or the application space. And I think without TOSCA, and without specifically the Cloudify implementation of TOSCA, we couldn't have done that. So that's a lot of the stack. The other pieces are a little bit more obvious: clearly we use Salt, we use Rundeck and the others. But I think these are the three key partnerships, and the like-minded set of individuals that we identified and brought together. And as I said, in the period between July and December, we stood up an off-premises cloud, including full connectivity, in a very, very short amount of time. OK, I'm going to hand over now to Vasil Avimov. He actually works at Risk Focus. He's going to walk you through some of the technical decisions that we made and the architecture that we actually ended up with, and then I'll come back. Hi guys. So, I'm speaking on behalf of the TD implementation team, but really there's no way that I can claim the credit for this implementation. I think a lot of the TD employees that were actually there, and that joined afterwards, were much more instrumental in standing this thing up than I was at Risk Focus. Some of them are in the audience, so if you ask me some hard questions at the end, I'll just direct them to them. So of course, when you start off with an architecture, you need to first pontificate on what principles you want to espouse. And for us, one of the really key ones was componentised architectures. We knew up front that monolithic architectures would just not serve our purposes.
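To make the TOSCA point concrete, here is a minimal sketch of what a Cloudify-style TOSCA blueprint looks like. This is an illustration only, not one of TD's actual platform blueprints: the import URLs, type names and inputs are the stock ones from Cloudify's OpenStack plugin documentation, and the application-server node is a placeholder that a platform team would extend.

```yaml
# Hypothetical sketch of a TOSCA-style blueprint in the Cloudify DSL.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml

inputs:
  flavor:
    description: OpenStack flavour selected from the service catalogue
  image:
    description: Bank-approved, agent-equipped base image

node_templates:
  app_server:
    type: cloudify.openstack.nodes.Server
    properties:
      server:
        flavor: { get_input: flavor }
        image: { get_input: image }

  app_platform:
    # Generic placeholder; a platform team would extend this with
    # its own JBoss / WebSphere / IIS specifics.
    type: cloudify.nodes.ApplicationServer
    relationships:
      - type: cloudify.relationships.contained_in
        target: app_server
```

The point of the DSL is the one made above: the blueprint describes topology and relationships, the inputs are the knobs developers turn, and the infrastructure-specific behaviour lives in the imported plugin types.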
We need to be able not just to put new apps onto the cloud; we need to be able to migrate apps which were never intended for that. The whole toolchain is evolving at such a pace that there is nobody who has the whole solution, and nobody will have exactly the solution that TD needs. So we need to break it up into pieces and actually control the solution, even if we use off-the-shelf components or buy components. The reason we go for open source is not because it's free but because it's open. It makes things debuggable, searchable, and, very importantly, there is a community around it which in essence acts like a force multiplier for your support. We did look at some of the full-scale solution vendors, and we just found that it was very hard to get an answer that was technical enough and to the point; usually it just takes a long time. We of course needed to go with standards. TOSCA is one of the few standards out there right now for orchestration, and that not only allows us to avoid vendor lock-in in terms of the different IaaS implementations, but also to move into other orchestration tools if we find at some point that there are ones that serve our purposes better. We've actually exploited this capability already: we were trying to stand up an Oracle RAC instance in OpenStack and we weren't able to do it, but the blueprint was portable, so we were just going to move it to VMware. Our clients don't really care. Of course, the problem is that there are lots and lots of integration points, and I'll show you what our current architecture diagram looks like. The key thing to take away is that the control plane on the left drives the different IaaS implementations, both on premises and off premises. There are also, as you can see, a very large number of components and moving parts. However, in the top left-hand corner (I don't know if you can see my mouse over there), most of these components are actually all just Cloudify Manager. So even though there are a lot of different processes, they all run within the same Docker container and represent one component. I think what you're seeing is that as you move into this kind of toolchain, the tools themselves are componentised, and you almost get this fractal view of components repeating themselves. Now, there are concerns. Firstly, things change at a crazy pace, and with so much churn, backwards compatibility and consistency take a back seat. We chose seven products and we ended up with 45; in our efforts to limit the catalogue of technologies that we'll support as a bank, we sort of tripped over ourselves. The other thing is you really need a high level of skills, so you need commitment: on the one hand, the organisation has to be willing to go out and hire very, very capable technologists, and, equally importantly, to commit to training its existing resources on the new system, because the existing tooling and skill sets are just not sufficient. However, there is a lot of transparency. We see the bugs via Jira and whatever other bug trackers are publicly available; you can see them being resolved. We also see very large numbers of them; the current OpenStack distribution that we're running on has something like 10,000 bugs, so that's kind of a large number. But then, when you think about it, who knows how many bugs there are in the closed-source vendors? Just because you don't want to see how the sausages are made doesn't make them any safer. The other thing is there is a lot of enthusiasm in the community, and that speaks volumes.
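The portability described above, where the same blueprint was going to be moved from OpenStack to VMware unchanged, can be sketched roughly like this. Everything here is hypothetical and for illustration only: the class and function names are not the actual control-plane code, and the real drivers would call the Nova and vSphere APIs rather than return strings.

```python
class IaaSDriver:
    """Interface that each IaaS back-end (OpenStack, VMware, ...) implements."""
    name = "abstract"

    def create_server(self, flavour: str, image: str) -> str:
        raise NotImplementedError


class OpenStackDriver(IaaSDriver):
    name = "openstack"

    def create_server(self, flavour, image):
        # A real control plane would call the Nova API here.
        return f"openstack-vm:{image}:{flavour}"


class VMwareDriver(IaaSDriver):
    name = "vmware"

    def create_server(self, flavour, image):
        # A real control plane would call vSphere here.
        return f"vmware-vm:{image}:{flavour}"


def deploy(blueprint: dict, driver: IaaSDriver) -> list:
    """Walk the blueprint's node templates and realise each server node.

    The blueprint itself never names the IaaS, which is what keeps it
    portable across back-ends.
    """
    servers = []
    for node in blueprint["node_templates"]:
        if node["type"] == "server":
            servers.append(driver.create_server(node["flavour"], node["image"]))
    return servers


# The same blueprint deploys unchanged to either back-end.
blueprint = {"node_templates": [
    {"type": "server", "flavour": "m1.large", "image": "rhel7-hardened"},
]}
on_openstack = deploy(blueprint, OpenStackDriver())
on_vmware = deploy(blueprint, VMwareDriver())
```

The design choice being illustrated is the one the talk keeps returning to: keep the deployment description (the blueprint) separate from the infrastructure binding (the driver), so clients never have to care which IaaS their workload lands on.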
Of course, self-service was one of the main things we wanted to address. Because we chose a componentised architecture of single-purpose tools, by definition there was nothing that would fully glue it together and provide a unified user experience, so we had to build our own microservices and UI, and also introduce concepts that were just missing. For example, we introduced the notion of a service catalogue. That service catalogue is actually maintained by the various business-aligned teams, so they have control over it, and developers can go and select blueprints, customise the flavour that they want to deploy, and just click and deploy. When a VM is finally deployed, it is actually fully bank-compliant: it has all the agents installed, it's up to date, and so on. Of course, another thing that was missing is any level of monitoring and control. I think generally OpenStack is pretty mature from an operational perspective, but if you go in expecting a very luxurious ride, you're not in for a treat. However, because it's all open, you can actually build tooling around it pretty quickly, and you can see that just some of the dashboards provide a lot of transparency. Also, because our stack is way more than just OpenStack, the tooling needs to be cross-cutting; even if OpenStack did have extremely rich capabilities, that's not the full solution, so we still need to build on top of it. And of course there needs to be cost transparency, right? Not only are we changing the model of pricing, but we also have the self-serve model, and at the same time figuring out how much you need to invoice your customers is a manual exercise. We've not really solved that problem, so costing and transparency have to be part of that whole self-service equation too. So what approach did we take in order to execute? Well, it was just: cut scope and run like hell. So I think we just
have to embrace the fact that we don't know what we don't know, and we're just going to have to try. We were able to make significant progress; it had its challenges, and we'll get to that slide. But once we actually stood up the first version of the cloud, we had to stop and reorganise, because you can't go from a tiny little team that's running at high speed to a large team that can service an enterprise without any structure and without any pause. So we ended up restructuring the team, and we broke it into three pieces. There's the core development piece, which is essentially responsible for maintaining the cloud service layer. There's another team which is responsible for acting essentially like internal professional services, helping the business-aligned teams migrate their applications onto the cloud; they also act like a business requirements and testing team for us, to a large degree. And then there's a team which is responsible for just putting new services on the platform. We started off with only Red Hat, WebLogic, WebSphere and JBoss, and by now there's also IIS and SQL Server and Oracle and so on. That's a separate team which doesn't necessarily need to know everything about the cloud service layer; they just need to know how to move different platforms onto the cloud, and in some ways the expertise within those platforms is more important than a full understanding of the cloud. And we had challenges. Probably the biggest one is that we're constantly behind; we're always catching up. Since we're onboarding clients while at the same time we're trying to build out the platform, things break, and that just requires a lot of support from management (thank you, Graham) and from all the clients; you need some understanding that we are trying to compress the time frames, which means that it will be more of a rocky ride to get there. And of course we've had outages; any outage you can name, we've had it. So we brought down RabbitMQ
and brought down the whole of OpenStack. We've had bugs in every single bit of the software stack, and it's a pretty deep software stack, so we've been encountering a lot of that. And there's just very complex connectivity: extending a network to add a few floating IPs shouldn't turn out to be a week-long debacle. But you learn, and it gets incorporated. So, back to the processes. You know the Acela train, which is clearly a way faster train than the one on top, runs between Boston and DC. The old train's average speed was 63 miles per hour; the Acela's is 65 miles per hour. So just putting in a shiny new toy, without changing the processes, will not give you the solution that we're after. There are examples of this that we encounter on an everyday basis. The security ID solution that we were using required people, once a machine comes online, to go into a portal and enter a PIN; that just does not work very well when you have a cloud. Thankfully, in this specific case it turned out that there were APIs and we could automate it, but that was just one example for one agent. So re-architecting the processes is one of the really key, hardest things to do. And Graham, I think with this I can hand it over. Thanks, Vasil. So, moving forward, or not... there we go. We've accumulated processes in the same way as we've acquired businesses, and working across 10,000 different technology individuals and probably something in excess of 20 different business lines, the processes tend to sprawl and there's many exceptions. The top left-hand corner is a real page from one of our infrastructure design processes; that's page one of five. So historically, the process to design, specify, move money around and ultimately purchase and build servers can actually take something between 30 and 90 days. Now, we would have loved to just drop a new process on top of our shiny new cloud, but then we'd end up with two, because most of the applications moving into cloud actually need to span both the old and
new environment until we get all of our data sources to cloud. There's an awful lot of cross-connectivity, and that means you're forcing many projects, if not all projects initially, to follow both the old process and the new process. Now, clearly they're both moving at very, very different speeds, and inevitably what happens is that the new cloud process ends up waiting for the old one, and not just hours or days but weeks. So we had no choice but to take a step back and say: look, we're going to have to define a single new process, and sure, it's going to have the occasional branch that caters for dependencies on the old world, but we're going to have to define, educate and move the entire organisation to that new process. A very, very difficult task when you think about where we are and the fact that the existing process involves many people across many, many teams. And the way that teams typically build process is they provide gates, or maybe a better description is a wall, directly in front of them, so they protect themselves from everybody upstream. Breaking down those walls, integrating those teams, creating new process and then looking at automating that process is probably the most difficult task of the entire project. Yes, getting 50 or so open source components to work together is very, very difficult; changing the process that 10,000 people have been living with for the best part of 10 years is very hard. The two aren't even in the same league. So, the bottom left-hand corner: clearly we have a reasonably well-defined PMLC or SDLC process. We're pretty waterfall in the bank; it's very hard to be agile given the process I've just described. So waterfall helped us, and whilst some of those gates, some of those walls that I mentioned, weren't terribly valuable and certainly weren't going to appear in the future state, there's a couple of gates that actually helped us. So firstly, anybody who needs a server in the bank has to come and talk to me, so clearly implementing a cloud-first
strategy becomes very easy: you have to convince me to buy a server, otherwise you're going to cloud. And I'm granting very, very few exceptions. So if anybody wants to run a project, I'm forcing them all to cloud, and there needs to be a very, very good reason why they can't do that, and I've got air cover right up to the chief executive to do that. Very important. And the second thing is you need to also consider the implications of a self-serve model, because everybody wants to play with a lot of shiny new toys. Now, if you remember back to Monday, one of the problems we're solving is the diversity we have in the data centre. So yes, we selected seven technologies to build cloud on; they brought 40-something friends in the open source community, so we introduced 50 technologies, let's say. But we had a catalogue of over a thousand platforms, and we've only decided to move seven of those to cloud, and Vasil mentioned them: WebSphere, JBoss, Windows, IIS and Linux. That's it. If you want self-serve, if you want the instantaneous gratification that you desire, use one of those seven. If you want to go off-piste and use something from the thousand-line catalogue you're used to, fine, but I'm going to make it very, very hard for you to build it, and if you do build it, I'm going to make it very, very hard for you to do it quickly. It's going to be manual, it's going to be very, very slow, and I'm not going to automate it. Your choice. It's important to note that there has to be a second gate. So I've talked about gating the cloud-first model; the second gate has to be around quality. If you've got people playing in a self-serve sandbox, and they are consuming a lot of resources in that sandbox, and they don't really want constraints, you have to have a gate ahead of them promoting software into the upper environments, and clearly ahead of them getting anywhere close to production. And that gate really does two things: it allows us to take a look at what they've done, to identify things that they should have done better or differently, and to
actually work with them to introduce things that we missed, so that the next person doesn't have to jump through quite so many hoops. And I think that's the process that we're working towards. Clearly, as we extend across other functions it gets much more complicated, but I think this is the most important slide in the deck: the process re-engineering aspects of introducing a cloud in an enterprise of this size are huge. Moving forward, we talked a little bit about adoption governance on Monday. Now, the top left-hand corner is really an adoption or maturity curve, or S-curve as the industry calls them, and clearly we're trying to jump from one S-curve to another that's dramatically different. As you begin at the bottom left-hand corner of that curve, you're innovating, you're prototyping, you have no users, you can move very, very quickly. As the product matures you slow down a little bit; you're acquiring users, you discover problems you never envisaged, and I think the way that you want to drive adoption, the way you want to govern adoption, changes. Clearly the product becomes mature at some point; you start ramping up that incline, the product gets mature, it becomes more standard, it becomes more of a service, and teams like mine, the engineering team, can disengage a little bit. Not that we're going to stop investing in the platform; it's crucial that we continue to do that. But the way that we're gating people that want to run projects, and the way that I'm forcing them to cloud (hey, there's no choice, you're using it), that changes; that's reactive. So clearly, as you start going up that incline and you start plateauing, you need to be very much more proactive. The people that I'm forcing to cloud, the people that are adopting cloud right now, are people that are running projects, and you essentially do that because it's part of your business strategy for this year, or because you have a major currency issue this year that you have to address. There's a huge part of the application portfolio that the
businesses aren't touching this year; they don't need to change those applications. The way we're approaching the problem right now, those applications would never migrate to cloud. So really, at the bottom: clearly, I've said that we have a lot of support from the chief executive down. The CIOs across the business lines are all in on this solution, and they're all committed to moving 80% of their application portfolio to cloud within five years. You don't necessarily want to trust them all to do that, so really you need a different type of governance. You need to spin up a programme team, independently from the platform team, whose job it is to proactively work across 20-something business lines: to actually understand their application portfolio, to understand their application architecture, to break down those architectural dependencies and actually work out what the graph looks like to migrate those components one at a time to cloud, and to put plans in place, quarter by quarter, year on year, for the next five years, in order for those business lines to commit to that, to understand how they do that, and in order that the centre of the house can track their progress on a name-them-and-shame-them basis. We can't wait five years and find out we didn't do it, we forgot; we have to track their progress on a frequent basis, and that's where dashboarding comes in. That's where having a different type of proactive programme management really becomes very, very important. So that's really the governance piece. Now, there's some psychology here that we should probably talk about. We talked about the maturity curve; the adoption curve, you know, looks a little bit like this, right? Clearly it's always the innovators and early adopters first, and the vast majority clearly jump on board the bandwagon towards the right-hand side of the curve. There's an emotional rollercoaster that comes with this, and the curve at the bottom really represents that. The peak at the left-hand side of the bottom curve really is an awful lot of enthusiasm, a
lot of selling that happens early on in this kind of initiative, and it's very easy to mismanage the expectations across the entire enterprise. So everybody gets very, very excited, and as soon as that's over you have another WTF moment. It's like: I'm not doing this, I don't want to do this, why are they making me do this, if I keep quiet they'll go away. There's a lot of very, very negative reaction that happens where it says "disillusionment" there. Someone's forcing me to do something, I just want to be left alone to do my job, why are they making me do it? The industry typically calls this the valley of death. It takes a very long time to work through this to the point where you can realise those expectations. You need to retool the entire enterprise, you need to start working up the other side of that curve, you need to have people see the value of what you're making them do. And it's not until you get to that point, where they're beginning to realise some of that initial promise and they get past a number of their initial fears or expectations, that you actually start to realise the value, and that means that you've actually started to mature the product quite well. As I said, you've got an independent programme to run the adoption process; people are not so much questioning it, they're actually providing feedback into product strategy. So you just get through that valley, and it's crucial that you keep your early adopters very, very engaged. If you lose your early adopters through that process, you're screwed. The psychology of what we're trying to do is very important, and it's very easy to underestimate the impact of that emotional rollercoaster. OK, now the last bullet point: clearly we put a couple of bullets on there for private cloud versus public cloud. I'm guessing it's at least a five-year journey. Clearly, by introducing private cloud we're moving the entire organisation in one direction; as I said earlier, we're addressing some of our application design issues, some of our latency sensitivity. But I
think it won't be until we reach that enlightenment phase that people within the bank, and across the industry, will truly be ready to move from private to public, and maybe five years, ten years down the road, why would we have a data centre? Moving forward: we talked about moonshots, we talked about testing our rocket on monkeys. I think preparing for space follows on from that emotional rollercoaster, the psychology side. Clearly we started off at the far left-hand side with some terrified individuals across the entire organisation. We spent a while finding them, wrangling them, forcing them to attend meetings, forcing them to be trained. Clearly we created an environment where it was relatively safe to play; in the centre picture, clearly: you are my first VM. We educated people continuously; we trained 1,500 people, as I said, through the first three, four months of this year, and that wasn't just one training course, that's multiple training courses. We run bi-weekly open mic sessions, we push out bi-weekly FAQs; it's a constant effort to train an organisation of this size. And as that snowball runs downhill it gets harder and harder, and people forget. There's a learning curve, but there's also a forgetting curve, and it's a constant battle to keep the entire organisation engaged, to make sure we don't lose those early adopters, to make sure that we generate and keep the excitement that we need to keep this project on track. And remember, it's a five-year project. As I said, we stood the cloud up in five months last year, and we'll have the entire production thing up in July this year. That's within a year of the board agreeing the investment, and that's a huge achievement. But the engineering job is 80% complete at that point, so we need to take the platform forward, at least to mature it, but the job of adoption continues for much, much longer. So I'll have moved on, I'll be solving other problems, but the people managing the adoption will be there for a very long time. OK, I think we're almost done. So in
terms of next steps, we're working through a little bit of design for regulatory reasons: putting credit card information in a virtual environment is a little bit complex. We're extending the cloud into our DMZ, which makes it a little bit less elastic, to be honest. Clearly you've got a vision of a single cloud, but when you end up zoning it across development, SIT, UAT, PAT, production and disaster recovery environments, and across web, application and data tiers, then we're going to end up with 12 zones in our cloud, each with physically dedicated servers. So it isn't a single cloud, it's 12 clouds at that point, a bit more complex than we originally imagined.

Application onboarding I've spoken to. Scale is something that clearly we need to deal with: having hundreds of applications on the cloud is very, very different from having the entire application portfolio of 4,000 on cloud, so scale is something we're going to be working through for a while. Docker I've mentioned. And we're hiring, so we have vacancies; you guys are here and I've got bananas. So that's the end of the presentation. If any of you have any questions, I'm sure Vasil, Queen or I are happy to answer them.

Hello, I was wondering, what were some of the critical mistakes that you made along the way, and what did you learn from those mistakes? Because nobody gets it right the first time.

I think, going back to the curve that we saw earlier, the initial selling. I've avoided sales all my life. It's very easy to get an entire organisation excited and motivated and to raise expectations through the roof, and I fought against others in the bank that went a little bit too far in that direction. I lost. But clearly I then get the challenge of working through that early miscommunication, or that sales process, and correcting it, so I would always fight against that. I think it's better to start quietly, to develop a solution, to allow it to mature a little bit, and then start selling it. That's just me; I totally get the way the banks
work, but I would call that a miss. I think in terms of technologies, we're trying to dramatically reduce our catalogue. We went very carefully through the selection process for how we wanted to build the PaaS layer, and as I said, we selected seven technologies. I'm trying to gate the new technology introduction process. Seven technologies, that's OK; we used 50 to make it work, and that was unintentional. So we've been back-dooring technologies and avoiding the process that I run on behalf of the enterprise in order to get this project done.

I think the second thing is, clearly by nature I don't like planning on a right-to-left basis. When my boss gives me a date and tells me to get it done, it's not a great feeling. I'd always prefer to plan on a left-to-right basis and optimise the hell out of it, but at that point the date is the date. So, not mistakes, but in terms of things I would do differently, I think those are they. If I were allowed to run this in a slightly different way there might have been a different result, but almost certainly it would have taken a lot longer.

Anything else?
The scope here is, you're a bank, and you're a big player, and you've taken a very comprehensive stance. Can you share what precipitated the spark that started this journey? Because it looks like you're going full out with the moonshot.

I think there's an appreciation for how dramatically we'd grown, as I started with, but our project-centric focus really had driven that explosive growth, and it almost prohibited any kind of automation and any kind of standardisation. When you're making decisions that transactionally, and that includes the introduction of new technologies, it typically means throwing people at testing and at release management. Clearly that's a great way to build a business, transactionally, in a siloed kind of approach: 20 businesses, all entirely independent. But to a certain extent some of them are building the same kind of software, the same kind of solution, solving the same kind of problem. So it gets expensive, it gets fragmented, it becomes fragile, and it gets harder and more costly to maintain. The appreciation was that the way we've developed software, the way that we've grown, as I said, waterfall-style over the past 10 or so years, can't be the way that we move forward. Everyone talks about agility, but achieving agility in the environment that we had was never really an option.

But where was the aha moment, where you actually said, we've got the problem, everybody understands the problem, I've set the table, now the solution is cloud, give me whatever it takes to go there?

It was actually a complaint that happened a couple of months ago. When we had the process whereby it took 30 to 90 days to actually get a server in the bank built and configured, people kind of accepted that; there weren't a lot of complaints on a day-to-day basis. Then a couple of months ago we started attracting complaints from developers: why was my VM taking 15 minutes to start? We didn't anticipate that, and it's a hard problem to solve, but that was the moment that we
realized that we'd turned a corner and we were engaging the community in a different way.

Sorry, I just wanted to back up. That's the cloud now, but you're saying, the way we're doing it now ain't gonna get us there, I've got to do it differently, here's my plan. I'm trying to find out where that spark that kicked off that journey happened.

So the journey happened in the way I described: it was really a time-to-market play. Now, as we were building the business case, as I mentioned, clearly the bean counters turned that into a cost play, and we struggled with that for several months. How on earth do you demonstrate you're going to save money through building a cloud? It's not cheap to build. On the change-the-bank aspect of it, running a project, the work might move around a little bit, but most of it is still there; when people are waiting, they're doing other stuff, they're not spending money waiting. As you get into the operational part of it, the run-the-bank part of cloud, then you can see some savings. Our utilization on physical hardware was horrible, it was 25%; even on the virtuals we were building it was more like 50. If we can realize something like 75% resource utilization on cloud, then that's a savings, and we can model that. But it's about working through that process, and it's about understanding how you automate, what you automate first, how to realize those savings. So as we went through this journey the cost dimension became much more important than the time dimension, and that was a surprise.

You kind of touched on my question already, but it's clear that from the top down your organization gets cloud, and I'm wondering if you've had a chance to turn your thoughts to the bottom-up kind of stuff: encouraging the users to be good cloud citizens, with say a billing system or whatever, so that they turn their instances off when they're not in use, and you can really start gaming the system like you do with Amazon.

That's a great question. So, I think Vasil touched on billing, but moving to a
self-serve model: we predicted 10,000 VMs over five years, so we stood up 2,000 in December, or at least capacity for 2,000. By April we'd pretty much run out of space, and that was developers playing in the environment. Some of that was legitimate project execution, some was trial and error; as I said, we created a safe place for them to experiment, and a lot of them were running performance tests, but very few people were behaving in a responsible way.

Now, historically, when you're buying a server you pay for the full depreciation cycle: you pay for the server for three or five years. In a cloud model you pay on a utilization basis. For us, that means right now the minimum period you have a VM for is a month, because we're working through maturing some of our financial processes. You don't pay based on the minutes you consume, you pay for monthly units, and that means developers aren't motivated to rip down their VMs very quickly; that's a behavioural change we're having to work through the organisation. As we mature our financial processes we'll get that down to something that looks very like a phone bill: you used this many minutes per month, here's your invoice, and we'll recover money much more frequently. The challenge that creates, though, is that banks don't typically have a mechanism for recovering the cost of unused capacity; businesses pay for used capacity. Getting them to the point where they're happy to pay for the unused capacity we're carrying in the cloud, which fluctuates on a real-time basis, is a completely different change. The reason we're charging for monthly units right now is that it's a small step; we know where we need to get to, it's just going to be a while.

We've been told we're out of time, so I think you know where I live. Any questions, drop me an email, over seal at riskfocus.com. More than happy to help you through whatever challenges you have, or tell you some more about the problems that we solved and how we got to where we are.

OK, thank you.
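The utilization and chargeback arithmetic from the Q&A above can be sketched as a small model. The utilization figures (25% physical, 50% virtual, 75% cloud target) and the monthly-unit versus phone-bill billing models come from the talk; all function names, the workload size and the rates are illustrative assumptions, not anything the bank actually uses.

```python
import math


def servers_needed(workload_units, units_per_server, utilization):
    """Servers required when each box is only driven to the given
    average utilization: lower utilization means more idle hardware."""
    return math.ceil(workload_units / (units_per_server * utilization))


# The same 300-unit workload at the utilization levels quoted in the talk:
physical = servers_needed(300, 10, 0.25)  # 25% on physical -> 120 servers
virtual = servers_needed(300, 10, 0.50)   # 50% on VMs      -> 60 servers
cloud = servers_needed(300, 10, 0.75)     # 75% target      -> 40 servers


def monthly_unit_charge(hours_used, rate_per_hour, hours_per_month=720):
    """Current model described in the talk: any usage within a month
    bills the whole month, regardless of actual consumption."""
    months = max(1, math.ceil(hours_used / hours_per_month))
    return months * hours_per_month * rate_per_hour


def metered_charge(hours_used, rate_per_hour):
    """Target 'phone bill' model: pay only for the hours consumed."""
    return hours_used * rate_per_hour


# A developer who uses a VM for 10 hours still pays for a full month
# under monthly units, so there is little incentive to tear it down:
monthly = monthly_unit_charge(10, 1.0)  # 720 billable hours
metered = metered_charge(10, 1.0)       # 10 billable hours
```

The gap between `monthly` and `metered` for a short-lived VM is exactly the behavioural problem described above: until billing granularity drops below a month, ripping a VM down early saves the developer nothing.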