Good afternoon, everybody. My name is Kathleen Mackie and I'm an engagement leader at EduServe. With me is my colleague Colin Blake, who is one of our solution consultants. Today I'll be providing a summary of the findings in our joint report with Socitm titled Local Government Cloud Adoption in 2018. Colin will then delve a little bit deeper into the technical perspective of the journey to the cloud. You'll notice on your control panel that you have the ability to ask questions. If you have any questions as we're going through, please feel free to submit them. We'll come to all the questions at the end. So for those of you that don't know us, EduServe is a trusted technology advisor and ally to the public and third sectors. We work with our clients to help them get the most out of public cloud through our suite of professional services, managed services and application development services in partnership with Microsoft. We help clients to understand their infrastructure, build a business case, define a strategy, migrate to the cloud, optimise their use of public cloud once there, and develop digital services using the tools available. We believe in knowledge transfer, helping our clients to become skilled in delivering their own services. As a not-for-profit, we are free to align our objectives with our clients' and are able to reinvest our profits back into the communities that we earn them in. We do this through our education programmes or by investing in community projects. One of the main ways that we do this is through the Executive Briefing Programme. The EBP has been conducting research and connecting the sector over the last three years. Our recent research includes Skills for Digital Change with the PPMA, the Public Services People Managers Association, looking at the cultural impact of digital change, and Engineers for Change in partnership with the CFG, or Charity Finance Group, looking at the role that finance directors play in digital change. 
In the same vein, we are also running a project with CIPFA, the Chartered Institute of Public Finance and Accountancy, which brings us to the report that we have worked on in partnership with Socitm that I'm here to discuss today. The project ran from November 2017 until February 2018 and was made up of two main parts: survey responses from 373 UK councils covering a range of information, and in-depth qualitative interviews with 11 IT leaders representing 15 organisations. These organisations have varying levels of cloud adoption and a range of perspectives on the future of local GovTech. Through the interviews, it became clear that the main drivers for cloud adoption were resilience and flexibility. Our IT leaders spoke often of data centres that were really not much more than ageing hardware in a shed. Maravon Hassell from Ailsbury Vale spoke of the increased resilience of standing on the shoulders of giants such as AWS and Microsoft, and of having access to state-of-the-art data centres with millions upon millions of pounds invested in both the hardware and the physical buildings, something that would never have been possible in the district council before. Added to this are the security measures taken by the hyperscale public cloud providers to satisfy the needs of their defence and intelligence customers, which become the standard service for all users. Rob Mzakiva from Enfield Borough Council spoke about the flexibility of being able to turn things off when not in use, spin up new services, or autoscale to fit demand. This is a huge step change: historically, capacity hoarding was essential to meet the unknown future demands on the estate. In the new world, capacity is waste; you only use what you need and you only pay for what you use. 
These factors combined mean that the councils that have begun to adopt the cloud have the opportunity to enable a more agile workforce to develop and deliver better services to the citizens that they serve, in the way that they expect and deserve. Roy Grant from York City Council said it best: what we have is a large oil tanker, but what we really need is a flotilla of smaller boats that can move in any direction at a moment's notice. We asked all of our contributors what they thought success looked like to them, and Omid Sharaji made an important observation. He said, and I'll quote him directly: "For us, success in IT would be when nobody is having a conversation about cloud. When the dialogue in your organisation shifts to asking, is our portfolio the right portfolio? Are we designing services around citizens and exploiting the tools that are out there in the marketplace to do that? Is the data flowing through the organisation in a way that is consistent and enables us to get new insight and to automate? When we are having those kinds of conversations and you are not talking about cloud, it's just a delivery mechanism. That is what I think success looks like." We can say with a degree of certainty that the majority of IT leaders in the sector believe that the question surrounding cloud is not if, but when. We wanted to find out what was holding organisations back, and a few key themes were repeated time and time again. The first of those is cost. The impact of cost on decision making was largely based on two factors. The first of these is the shift from CAPEX to OPEX. We heard from our contributors that the effects of austerity have been felt most severely in the revenue budgets, and that this made it difficult to build a business case. As I mentioned at the start of this presentation, we are currently working in partnership with CIPFA, so I took the opportunity to pose the question to the chief financial officers on our panel. 
The answer that they gave was somewhat surprising. The reason that business cases are failing is not the CAPEX/OPEX debate, but rather the cost-saving focus of the business cases submitted. The business case for cloud is not a cost-saving business case; it is so much more than that. It's about providing an environment that enables the business objectives of an organisation. The second blocker related to cost is closely linked to this idea. We heard that many organisations have completed like-for-like comparisons between their on-prem estates and the same estate in the cloud. There are two considerations here. Steve Achella told us, "I have never known a facilities team who can tell me how much our data centre costs to run, let alone who are able to help with the calculation of what we might save if we moved out and sold the building." A significant proportion of the costs of running a data centre are bundled with other costs, which makes it difficult to fully understand total cost of ownership. The other consideration is that you are unlikely to need to move your entire estate to the cloud. To quote Steve again, "The challenge for us is to identify what we can get rid of, what we don't want, and to get it to the right size." Furthermore, work undertaken to rationalise your estate prior to migration will decrease the cost of change in the future. And as more and more organisations move to the cloud, we would expect the cost of hardware and licensing to rise in line with the declining demand. So when you are making a business case and looking at cost, it makes sense to look at the return on investment two to three years down the line, and not just a TCO comparison. The second major blocker to cloud adoption spoken of by our IT leaders is legacy IT. Many councils are locked into long-term contracts for specialised services. 
This would not be as much of an issue if the suppliers were moving more quickly to provide cloud-native, or at least cloud-compatible, replacements for their services. Councils rightly want to get the most from their investments and are reluctant to re-procure a solution before contract completion. Even when the services do become eligible for re-procurement, there is still a delay in finding a cloud-native solution from vendors. Simon Hughes from Mid Sussex Council said, "If there isn't a good market in terms of supplier choice, we aren't moving. At the same time, we are also moving other areas of our IT estate towards infrastructure as a service, so we have less to worry about in terms of maintenance, updates and patching." This is one approach, and Colin will speak more on the other options available shortly. The last major blocker to cloud adoption from our panel of IT leaders surrounds culture and skills. I have chosen to use Maravon's quote here because their approach was very much at one end of the scale. Ailsbury Vale District Council put all of their staff through assessments to determine their fit within the culture that was needed to embrace the council's trajectory to become tax free by 2023. Only the staff that matched the behaviours and attitudes needed to make this goal a reality remain part of the journey today. This approach may not be for everyone, but it is often said that culture eats strategy for breakfast, and for good reason. The most successful transformation programmes will engage everyone, from members to data protection and governance colleagues all the way to frontline colleagues, at the earliest possible opportunity, providing education every step of the way. IT is no longer a support service for the business, but rather how business is done: we all work in IT. In other words, there is a question surrounding the changes to the traditional IT teams. We asked our IT leaders how they thought these roles would develop. 
Gareth Paulett from Cheshire said, "It's true that the move to the cloud means that we won't need to do everything that goes with managing our data centres in house. And yes, part of our business plan is to release expensive contractors. However, we still need people to manage, plan and deploy IT. Any capacity that we can free up is resource we need to take the organisation forward, so that we can act as business partners as we undergo further change." A sentiment that was echoed amongst the group was that colleagues who have been used to fixing broken things and stroking tin will be able to focus on higher-value work that contributes to the business objectives in a quantifiable and meaningful way. We know that there is a skills shortage in the public sector, and providing these types of opportunities will go a long way to ensuring that you can attract and retain high-quality staff. So in summary, our advice, and the advice that we got from our panel, would be: make sure that you start with your business objectives. It's not about cloud; it's about the business objectives that you're trying to meet. Work with your finance team to understand the business case. Start early: engage with all levels of the organisation as early as possible, educate where necessary and bring everybody along for the journey. Find a partner that will transfer their knowledge to your team. This is vitally important, because once your partner leaves, you need to have the skills to maintain your services. A phrase that is often heard in our office is that while the technology piece is complex in many ways, in comparison to the issues that I've spoken about here, it's the easy bit. So I'll hand over to Colin to take you through the easy bit. Thank you, Cathy. Just a little bit of background about myself: I was working as an engineer in the field up to six months ago, on large-scale blue-light deployments for a police force in the Midlands. 
As part of that, I did a migration to the cloud involving a large number of workloads. During this process, I was hit with the absolute certainty that the cloud was our future, and as such I got re-certified and changed my job role within a couple of months, and I found myself here, very much an evangelist of the cloud. We at EduServe have spent a lot of time mapping out journeys from on-premises platforms to the cloud. The reason we have done this so many times is that we had our own private cloud ourselves and, recognising the breadth and depth of the services on offer from the big players and the rapidity of change, we were unable to compete effectively. So we made the decision to move ourselves to the cloud. And being a trusted tech ally of our customers, it would be disingenuous of us to move ourselves but not to move them, and as such we've migrated them to the cloud as well. So we've been through this process many times, with many different applications, servers and cloud providers. In our experience, the journey follows three distinct phases: discovery, migration and optimisation. Discovery is possibly the most important part of the whole process and the most difficult to get right. We were working with a local unitary council and were contracted to do the discovery phase for them. There was no effective documentation. Their CMDB, or Configuration Management Database, was all over the place. They didn't know their servers; they didn't know their licensing. And if you don't know what you have, how do you plan to migrate it? There were so many unknowns that we had to take a mix of approaches. Firstly, we started with workshops. These gave us an opportunity to speak to the people that work with the platforms and know them best. This is an invaluable source of knowledge and should not be overlooked. 
Workshops are also a good opportunity to sell the benefits of the cloud: what people are going to gain, and how they're going to free themselves up for more interesting and meaningful work for the organisation. They also allay fears; a lot of people think that as their servers go to the cloud, that's where their job goes as well, and they'll be left high and dry. On a side note, I was looking at a website just recently that asked, will I lose my job to a robot? People with cloud and IT skills were rated in the high 90s per cent unlikely, so in fact getting into cloud actually improves job security. Then there's the tooling, and there are many tools, including Azure Migration Center and AWS Migration Hub. The big players have spent a phenomenal amount of money making it as easy as possible to get your kit up into the cloud; they want you to be hosting with them. The next task is to identify cost centres and applications. This helps you as an organisation see who the biggest consumers of IT in your organisation are, and then reflect that in costings and budgeting. It also helps highlight where SaaS offerings may be more effective than physical deployments. Think about Beryl and Bob in finance having their own server, their own SAN, electricity, power, backups, offsite backups, all of this for just a couple of users, when there is a SaaS (software as a service) application out there. You can simply retire the physical platform and move them to SaaS. The discovery phase also allows us to offer comparative analysis between public clouds. There are so many variables, and these are changing all the time; the marketplace is incredibly fluid. The providers are very, very competitive. They are keeping each other honest, and in real terms the price is being driven down. If one supplier gets an advantage, it's always very short-lived, because the others will pull them back in. 
We're talking about AWS and Azure, but there's also Google Cloud, so the market is becoming more and more competitive. The other thing to consider is that no matter how much work you do on your pricing, it is a snapshot in time, and from that point forward, like an uncontrolled printed document, it gets more and more out of date. So the best approach is to get your information together, look at all the different variables, make a decision and then commit to it. Finally, the discovery phase can be used as a blueprint for your cloud migration. It allows you to traffic-light all your applications: yes, no, maybe. Is it going to be IaaS (infrastructure as a service), PaaS (platform as a service) or SaaS (software as a service)? You need to determine all the interdependencies between the applications. If you lift your web front end away from your database and have one part in the cloud and one part on-prem, the traffic between them and the latency are going to be expensive and produce poor performance. And finally, this allows you to finalise your migration order, ensuring your dependencies are addressed. As I said, this is a blueprint and a living document: you use it in discovery, then go into design, and it keeps evolving all the way through; eventually it should be the document that holds the blueprint of your cloud deployment. In our experience, the implications of licensing, regardless of which cloud provider you use, are significant, and they really do have an impact on affordability and return on investment. In fact, there are many organisations that have full-time professionals working solely on licensing and nothing else. Then there's licence mobility. In its simplest form, mobility is taking the licence from physical kit that you already have in your data centre and applying it up in the cloud. A really good example is SQL. Most people have SQL; you have to license the OS and license the application. 
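As an aside, the dependency ordering described in the blueprint discussion above can be sketched with a topological sort. This is a minimal illustration only; the application names and dependency map are hypothetical, not taken from the council engagement.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map from a discovery exercise: each
# application lists the services it depends on. Dependencies should
# migrate first (or in the same wave), so a web tier in the cloud never
# sits an expensive, high-latency hop away from a database on-prem.
dependencies = {
    "web-frontend": {"app-server"},
    "app-server":   {"sql-db"},
    "reporting":    {"sql-db"},
    "sql-db":       set(),
}

# static_order() yields each item only after all its dependencies.
migration_order = list(TopologicalSorter(dependencies).static_order())
print(migration_order)  # "sql-db" appears before everything that needs it
```

Real estates have shared platforms and circular couplings that a plain topological sort cannot express, which is one reason the discovery blueprint has to stay a living document.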
You're licensing it per core, and you're paying a lot of money for it. Your SQL server has to be able to handle the highest load thrown at it, and as such it needs to be a fairly robust system. You can host your SQL in the cloud using an IaaS deployment, and your SQL licensing is fully mobile, across both clouds. The PaaS offering, platform as a service, varies between the clouds. AWS have a pricing system which is licence-included, but in Azure you can use what's called Hybrid Use Benefit, which I'll cover in a moment. Network virtual appliances, firewalls, load balancers and similar appliances: most of the big vendors now offer mobility of licensing or hybrid deployment, so you can lift the licensing straight off your physical device, put it on a VM in the cloud, and have the same functionality you had before. Generally speaking, the server OS is not mobile, but there is one exception, which is Enterprise Agreements and Software Assurance from Microsoft. These provide a manageable volume licensing programme, and the main benefit of this is HUB, as I alluded to earlier. Hybrid Use Benefit allows your licences for Windows Standard and Datacenter to be migrated to the cloud. You can then run your Windows servers at the same price as Linux servers and effectively recoup the cost of the OS. So in this unitary council, where we were just before, they had Software Assurance for 1,532 cores. This allowed 191 VMs, each with eight cores and 28 GB of RAM, to be licensed at reduced cost. This was really about servers and the core OS, and they realised a significant saving. Okay, once you complete the discovery phase, you move into the migration strategies. These can all be implemented to migrate your platforms to the cloud. 
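Before moving into the strategies, the Hybrid Use Benefit sums above can be checked with a minimal sketch. The core counts are the figures quoted for the unitary council; the calculation itself is just integer division.

```python
# Hybrid Use Benefit sketch: a Software Assurance agreement covering a
# pool of licensed cores is drawn down to license Windows VMs in the
# cloud, which can then run at the base (Linux) compute rate.

licensed_cores = 1_532   # cores covered by the council's Software Assurance
cores_per_vm = 8         # the VM size used in the example

vms_covered = licensed_cores // cores_per_vm  # whole VMs only
print(f"{vms_covered} VMs of {cores_per_vm} cores can be licensed")
```

Dividing 1,532 cores into eight-core VMs covers 191 whole VMs, which matches the council's figure.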
The strategies build on the five Rs published by Gartner in 2011; AWS added a sixth a few years later. This is a process that runs after discovery, and it succeeds only when you have clarity around the interdependencies of services, as I discussed previously. So in the first instance we have Retain. This is essentially do nothing: your application isn't cloud ready, and you should only migrate what makes sense. In all cases, I would say change for change's sake is to be avoided. Next, Retire: decommission the application. Each application owner needs to honestly state whether it is still necessary. Some people will have been working with these systems for many a year and hold a soft spot for them, but now is the time to ask: is it really relevant in the current marketplace? Rehost, or lift and shift, is arguably the most common migration at this time: you're lifting your kit straight from your data centre and putting it into the cloud. It is easier to make changes once you are in the cloud and metadata has been gathered. Replatform is optimisation: making changes in flight, essentially lift and shift with some changes as you go. Again, I would say it's easier to make your changes once you're in the cloud, and changes for change's sake should always be avoided. Repurchase, or buy SaaS: there's a product out there that already meets your needs. Do you really need Exchange in the cloud, or should you use Exchange Online on Office 365 with all the support that it has? And finally, Rearchitect. This is creating your platform from a greenfield deployment. It needs to be cloud-first in nature, using decoupled dependencies, making it resilient and scalable. So you've discovered your estate and you've classified it, and now you need to get the data into the cloud. This is the journey. There are multiple tools for migration. There are native tools, which each cloud provider has; they're always striving to make your journey to the cloud as easy as possible. 
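The six strategies just described can be captured as a simple traffic-light classification of the discovered estate. This is an illustrative sketch only; the application names and the choices made for them are hypothetical.

```python
# The six migration strategies (Gartner's five Rs plus AWS's later
# addition), applied to a hypothetical discovered portfolio.
STRATEGIES = {"retain", "retire", "rehost", "replatform", "repurchase", "rearchitect"}

portfolio = {
    "legacy-hr-system":  "retire",       # owner confirms it is no longer needed
    "planning-portal":   "rehost",       # lift and shift, change later in-cloud
    "email":             "repurchase",   # buy SaaS instead of hosting it
    "citizen-app":       "rearchitect",  # greenfield, cloud-first rebuild
    "mainframe-adapter": "retain",       # not cloud ready; do nothing for now
}

# Guard against typos before the migration order is finalised.
for app, strategy in portfolio.items():
    assert strategy in STRATEGIES, f"unknown strategy for {app}"

print(sorted(app for app, s in portfolio.items() if s == "rehost"))
```

Keeping the classification in one machine-checkable place means the blueprint stays consistent as the discovery document evolves.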
The AWS server migration tools are very good, as is Azure Site Recovery: you can simply replicate your VMs from your data centre into Azure. There are many third-party tools, which typically cost per VM and may offer additional functionality, such as rollback after deployment. Or you may choose to use a partner with experience of migrations. In many ways, once you're in the cloud, it's just the start of your journey. The iterative and continuous improvement of your estate is a goal that should always be pursued, as this allows you to gain maximum throughput, resiliency and efficiency for every pound you spend. You're really looking to squeeze your pounds and get as much processing power as you can. First, you monitor your platform once it's live and use sources of information to gather metadata: CPU, RAM and utilisation on a VM; errors and latency on an application. Then you look to make changes through informed decisions about what you'd like to change. You can scale in or scale out, which is the addition or removal of identical instances within a platform tier. You can scale up, where the instance size is increased within a single tier. Or you can move to or from IaaS and PaaS. There are many choices that can be made. Once you've made the change, you need to test it and make sure that it's working. The metadata that you gathered in the first instance you need to gather again, to make sure that you have made improvements to the service. If you've made changes and they haven't improved the service, you really want to roll back; otherwise you're adding change without improving the service. Finally, we have a look at the different model types that we have in the cloud. As you can see here, IaaS gives you more control, but it's more complex, whereas you have less control and less complexity with SaaS. IaaS treats servers as cattle, not pets. There's no more stroking tin and hoping the next hardware failure isn't fatal. 
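The monitor-change-test loop just described can be sketched as a simple decision rule. The thresholds here are illustrative placeholders, not recommendations from either cloud provider.

```python
# Pick a hypothetical scaling action from average CPU across a tier:
# scale out (add an identical instance) under heavy load, scale in
# (remove one) when the tier is idle, otherwise leave it alone.

def scaling_decision(avg_cpu_pct, instance_count, min_instances=1):
    if avg_cpu_pct > 80:
        return ("scale out", instance_count + 1)
    if avg_cpu_pct < 20 and instance_count > min_instances:
        return ("scale in", instance_count - 1)
    return ("no change", instance_count)

print(scaling_decision(90, 2))  # ('scale out', 3)
print(scaling_decision(10, 2))  # ('scale in', 1)
print(scaling_decision(50, 2))  # ('no change', 2)
```

Whatever action is taken, the same metrics are gathered again afterwards, and the change is rolled back if the service hasn't improved.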
If an IaaS resource is not performing as expected, simply tear it down and rebuild it. If you have a specific configuration that can't be supported further up the stack, you want to look at IaaS. You remain responsible for the performance and management from the OS level up, and it's vital that you have the workforce to support this. An ideal candidate would be a line-of-business SQL server with a dedicated configuration. Moving up the stack, you might look at PaaS, where you have a shared management model: you simply consume the processing capabilities and relieve yourself of the management overhead. Managed SQL is a service where you deploy only your databases and data, and applications consume them. You can have a single instance or a managed pool. It's tempting to use pools in most cases because you share resources, but you can get contention leading to poor performance. So, for instance, a SQL database that runs overnight and one that runs during the day would be an ideal pair to have in the same pool. And finally, with SaaS you completely move away from management, and the user only consumes the data or the resource. You have the least amount of complexity but also the least control. Essentially, you get what you're given, but it's ideal for services where you don't have the internal capabilities. Then we look at reserved instances. Going back to the previous discussion, we talked about the line-of-business SQL server as a steady-state load: 24 hours a day, the load is fairly continuous, as you see here on the green line. Over time you need to gather some metadata about where your peaks are, no matter how shallow they are, and make sure you have an instance that addresses them. You then pay for that over a one-year or a three-year commitment, and you get a reduction in price based on that, which can be seen here. 
So on the left we have a pay-as-you-go model, where this IaaS load costs £50K. With a one-year commitment to a reserved instance, that drops to £40K, and over a three-year commitment it drops to £30K. The one issue that needs to be addressed here is that you need to be certain your load isn't going to drop over time, because if it does, you won't get the same cost-effectiveness from the reserved instance. Then we look at autoscaling. This is where you respond to erratic or burstable loads, represented here on the green line. In your data centre, you would have had to have a box that covered the peak of the load, and it would have sat there the entire time: a great deal of overcapacity. What we look at instead is a smaller instance that covers 95% of your load, checking metrics on the application or VM. As soon as you approach peak load, you add an additional instance; that is called scaling out. We can see here that we follow the bell curve of the line, adding additional instances as the metrics are triggered. Equally, on the downside, as the load falls off, instances are removed as well. This gives you the most throughput and an agile response to load from your IaaS platform. So, in summary: your results will only be as good as your discovery. Without a clear understanding of your starting place, you will never reach your goals. Use all the tools available, especially talking with people; if you can sell the move to people early, the whole process will go much more smoothly. On licensing, it's very important to understand your existing profile and make efficiencies where possible. Licence mobility has real benefits. It is worth bearing in mind that on-premises may be more viable in the future as the marketplace changes. 
Microsoft may market cloud licensing more aggressively, and when you need to license something in your physical data centre, you could just draw it down from your Software Assurance. And management: this is a never-ending process. The market is fluid and always changing, so it's always good to have a trusted partner that will help with horizon scanning and optimisation suggestions, thereby wringing as much as you can from the money you spend. Thank you, Colin. I hope that's been useful to everybody so far. We are going to take some questions, so if you want to type in your questions straight away, we'll get to them as we go through. I think we have one question here. The question is: I have used lots of data centres in the past and it is extremely difficult to get your data out. What is stopping the cloud providers from raising prices and making it impossible to leave? Do you want to take that one, Colin? Yes. There's nothing we can predict about the future or any definitive answer to give here, but I will go back to what I said before about the competitive nature of the marketplace. The way that cloud works, you would move your infrastructure to the cloud and then have everything coded as infrastructure as code. That makes the process of moving away from a cloud provider much easier once you're already there. The other thing to remember is that if you were to remain in a data centre, your skill sets may become so obsolete in the marketplace that you wouldn't be able to have the workforce in place to make any changes even if you needed to. So in my opinion, although we can't predict the future, we can predict the competitive nature of capitalism, and as such prices should remain competitive. Great. So we'll give it another couple of minutes just to see if any other questions come through. 
While we're talking about that, it's probably worth noting, on that last point, that one of the best things you can do is find a supplier, a partner, that will work with you to help you rationalise your estate, so that the cost of change is hugely reduced. I know from our perspective, what we do is provide our customers with infrastructure as code, which means that although we hold the IP for it, you can take it away and reuse it as you need to. Just another couple of minutes. Any other questions at all? Yeah, if I could just expand on that a little bit, perhaps. As I said in my section, once you get to the cloud, that really is the start of your journey. You have an iterative and continuous process of improvement: you monitor, change and test. As you go through that, initially your changes may be fairly large, but over time you'll just be refining constantly. And as new products and services come online, you need to look at whether you can get more for your money. Can you get more throughput, CPU or RAM on your VM instances? Or is there a new service? For instance, moving from IaaS to PaaS might give you greater efficiencies, allowing you to deliver the same service at a lower price. Thank you, Colin. Right, I think that was probably quite a lot to digest. So we've got my contact details up on the screen; if you have any questions at all, then do feel free to either contact me by email or give me a call. Everybody that's been on the webinar today is going to get a recording of this webinar, so you'll be able to re-listen to it. You'll also get a copy of the report that we were talking about. I suppose that leaves nothing but for me to say thank you so much for your time. We hope that you found it useful and we'll be in touch. Thank you.