They say when you're going to do a talk that the first thing you should do is have very clear in your mind the first sentence or two that you're going to say, because that helps you get started and get some momentum going. I've been standing here thinking about that, planning to start with "good morning", and I just realized that would be wrong. I'm very confused about time zones at the moment because I've come in from London, I did a talk online for some people in Peru in the middle of the night, and now I'm back on Indian time. So if I go a little bit off track, please forgive me. Some of you will have seen my talk yesterday morning, where I spent some time telling you about the journey that I've been on over the last few years working in the UK government. For those who missed that: I'm a software developer by background and spent some time doing product management roles. About seven years ago I started working for the British government, where I helped set up an organization called the Government Digital Service. We were responsible for leading the digital transformation of the UK government. We worked right across government to change a number of policies, to deliver new services, save a lot of money and bring in a lot of new skills. That culminated for me in a period where I was the Deputy Chief Technology Officer for the UK government, and now I'm independent, providing consulting and advice around digital leadership, technology and, quite a lot of the time, security. We're all here for a session about cloud security, and on Sunday I'm doing a much longer workshop. Why do we worry so much about it? Why are we even here? Last week some of you may have seen the news that github.com came under a distributed denial of service attack that used compromised caching servers to send 1.35 terabits per second of traffic against that service. At the time, that being last week, it was thought to be the largest attack of its kind in history.
Since then, reports have started to surface that actually in the last few weeks there have been larger attacks. That's significant because the scale of that sort of activity is increasing significantly and rapidly. It's also significant because some of these attacks are against what have become significant parts of our global infrastructure. Many, many organizations depend heavily on GitHub as part of their continuous deployment pipelines. We probably should have designed some resilience around that, but a lot of people haven't, and so these sorts of attacks have all sorts of effects across a much wider system. We're also seeing more and more high-profile data breaches. Particularly over the last year we've seen large amounts of credit history information being leaked. We've yet to see the full impact of that, the degree to which people can make use of the information that's coming out to cause problems for individuals around the world. But there's certainly a huge opportunity for abuse there as those things come out. And from an individual and business perspective, we've seen senior executives having to step aside from their roles because of the embarrassment that these situations have created. And we've seen new questions being asked about sectors of the economy. So in the US, government conversations were triggered about the role of credit reporting agencies after one of them was compromised and their data was leaked. Again, we don't quite know where that will go, but if you're in that kind of business, that sort of scrutiny is quite unwelcome. From a system perspective maybe it's a good thing, but that's another issue. These are challenges that wouldn't ordinarily be expected when we talk about security. But they're raising the level of worry, the level of concern and probably the level of impact.
And at a more day-to-day level, there was a report released recently looking at cybercrime over the last year, which reported that 130 billion pounds was taken from consumers through various types of cybercrime in 2017. I'm not very good at how you describe very large numbers in India, I know you have a number of different terms for that, so I've not tried to translate that into local currency. I expect you'll all be aware that's a lot of money. And it's worth emphasising the bit that I've called out: most common crimes are generally low-tech. A lot of the situations which have led to this level of loss have been because of social engineering. They've been because of phishing emails which get people to click on a link and disclose some credentials. Or they've been because of well-understood security issues that haven't been dealt with properly, like SQL injection attacks. Things that we've known about for a long time and haven't dealt with. For a long time that was not okay, but you could probably get away with it. Increasingly, as the scale at which people can attack us has grown, that's not the case. So we need to start taking these things a lot more seriously. A lot of my perspective on this comes from that time that I spent in government, where security became a particular area of focus for quite a few of us. I was prompted to pause and reflect on that a couple of years ago, before I left government, when I was in the process of trying to hire somebody new into my team. I was telling him about the work that we did and some of the challenges we faced. And he turned to me and said, why does everybody who works in government become obsessed with security? And that threw me for a moment, because I didn't feel like I was obsessed. It just seemed important. But it made me reflect for a little while on what it was that had led to that attention. And there were two things.
One was that as we looked at the challenges in transforming government and providing much better services to end users, to citizens, we started to talk about what we called the square of despair. That was four things that were generally big blockers to change. One of those was governance processes and finance. It was very hard to get the right amounts of money at the right time in the right way. Everything was designed around very large amounts of money, obtained over very long periods, based on false certainties. Another was the procurement mechanisms, the way that we bought services, which again were designed for very large projects from old, slow-moving suppliers, not the more nimble, small companies that we wanted to work with. A third was access to skills, which is related to that second one: it was hard to hire the right people. Our processes weren't designed to get in the people we needed. Fourth was security. We had a huge amount of ceremony, apparatus and required processes that were designed to try and help us make what we were doing more secure, but they didn't really help. They often slowed us down. And they didn't let us focus on security as being an essential component of what we were doing. And why security is really important calls back to something I talked about yesterday. The primary currency of governments, at least democratic governments, is trust. And you win a lot of that trust by demonstrating competence. It's successive failures, the drip, drip, drip of failing services and embarrassing incidents, that erode that trust and slow you down from making things better. But that's not just a government thing. That's true of anybody who needs to have customers. Customers increasingly have options. And if you don't demonstrate your competence, and if you don't continually win their trust, they're not going to stick with you.
And that, for me, is the perspective that I bring to thinking about security: what do we need to be doing to continually receive the trust that's needed to make progress in whatever we're doing? And it's very easy for that to be undermined. Another incident in the last couple of weeks was when a JavaScript library designed to make websites more accessible was compromised and a crypto miner was added. So people were able to make a few Bitcoin from any visits to a website that included that library. Not a huge impact. People's electricity bills might have gone up a bit, their computers might have been a bit slower. It wasn't a keylogger or anything else that could have been in there capturing sensitive information. One of the sites that was affected by this was the Information Commissioner's Office in the UK. And that's the body that's responsible for data protection, for making sure that organizations have good data security policies. It's hard to look at them with quite the same respect when they've been affected by something like this, when they've been caught out with one of those basics. And that's going to make their job, which is really important, harder. One personal experience was a couple of years ago. Some of you may remember a very large denial of service attack against Dyn, the DNS provider, which took down quite a lot of online services. And it affected me and my team because we were using it as the underlying DNS provider for a number of government services. So I'm just going to pause because my battery's running out and power's not coming through. Is there somebody to just have a look at why this isn't charging? That seems to be getting power here. What we realized, when we were reflecting on this challenge, was that we'd known all along that DNS is a distributed protocol, that you don't want single points of failure in your systems, and that we ought to have multiple providers. But it had been hard to prioritize work on that.
But also, our security practices had required us to go to so much effort to review any DNS provider that we had that it was way too much work to have more than one. A set of practices that were meant to make us more secure had made us less resilient. Those practices had to change. Okay, so that's security generally. What about this cloud thing? While I'm here to talk about cloud security, I think one of the main things that I want to say in this context is that most of the things we're dealing with are not specific to cloud. If you look at another of the major incidents that's affected the UK in the past year, it was the WannaCry malware, which infected lots of machines across our National Health Service, meaning that appointments in hospitals had to be postponed, operations had to be postponed. An already struggling service was put under significantly more strain as, for an entire weekend, hospitals couldn't operate. That had nothing to do with cloud. It had a lot to do with old Windows desktop computers that hadn't been patched. But still, the thing that most people are worrying about when they talk about security at the moment seems to be cloud. Why is that? Well, to a large degree, that's because cloud represents change. And change is always scary. As humans, we have a bias to think of any new risk, or newly exposed risk, as significantly more problematic than what was there before. And so when you throw something like cloud into the equation, and all of the other practices that are co-evolving with it, so agile and continuous delivery and a lot of the other things we're talking about at this conference, it's not necessarily that they make anything more risky. Perhaps, as we could argue for quite some time, they make things less risky. But they shake up the way we've been thinking. We've all, for a very long time, had complicated supply chains underneath what we do.
Very few people fabricate their own silicon, build their own servers, write their own operating systems, but we all use them. We've always had some amount of shared responsibility, but when we move to cloud, we start to acknowledge that. So that makes us worried, and it makes us scared, because we treat it as if it's a new thing. It's really not. And in fact, when we move to cloud, we probably get better concepts and better language for dealing with it. Because instead of talking about a mash of different things that are thrown together to make what we refer to as monolithic systems, we end up with clear service boundaries and ways to reason about them. Most of our security issues have very little to do with cloud. Instead, cloud's a huge opportunity to rethink how we do a lot of this. Mark Schwartz, who until recently was the Chief Information Officer at a major US government department and is now at AWS, has been writing some really good pieces on LinkedIn, not my favorite place to read articles, but that's where he's chosen to publish them. He's been writing particularly about thinking about risk and cloud. His perspective is primarily commercial rather than security, but he addresses both. And looking at what's going on and the situations that many of us have inherited, he said that the risk of the new should seem negligible compared to the urgency of change. The threat environment, the scale of the threats that we're dealing with, is shifting. We have to bring new tools to the table to deal with that. If we accept the status quo, things are just gonna get worse. But any sort of change of this sort requires cultural shifts; all change is cultural. And there are some steps that we took across the system in the UK that I just wanted to touch on quickly. The most dramatic of them was that we took the security classification system, which had existed since World War II and had six different levels. That's my battery going. And we reduced it down to three.
By doing that, we took away a lot of the mechanisms and apparatus that existed around the way we thought about security. We threw it out. And we did that deliberately, to create a vacuum, to create a space where senior leaders had to think again about how they approached risk. And that provided a lot of challenges. But it meant that we had to reflect again on the impacts we were trying to defend against and what we were gonna do about it. Similarly, we made a decision over the course of a few years, this wasn't quick. It's not charging. I'll take another look in just a minute. We had been operating a lot of our services on a private internal network. And just like Google did with their BeyondCorp move, we recognized that actually a lot of what we were doing was leading to bad practices. We had created a comfort blanket for people. We said, here's a nice perimeter that all of our security exists within; if you're on this network, you're probably safe. Except, with thousands of endpoints on that network, thousands of ways in, and tens of thousands, if not hundreds of thousands, of people with access to it, any perception that that gave you a high level of security was false. But it had allowed people to not implement good application security practices, to not implement appropriate encryption. We had to make a sharp decision that we were going to stop doing that. Treat the network as hostile, take away that safety blanket, force ourselves to think again. Now we get to the bit where I really need a slide. That's on. Sure, I think that means it's not plugged in. Okay. I'll talk about a few other things. I might make it a bit shorter, see how we go. I only have it in my head. Okay, well, let's carry on, see how we get on. So as well as taking things away, we knew that we had to introduce some new ways of thinking about things. What we introduced was what we called the cloud security principles. They should be on screen in a moment.
That was 14 points that we thought were important to consider when assessing the security of pretty much anything, but particularly cloud. We used cloud because that was the new thing and people wanted some way of reasoning about it. They were fairly technical points, and there was a lot of pushback from people saying they wanted to find ways to assess products. We said no. This new world is about thinking differently about how we adopt tools, how we adopt services. We need a set of things that we consider. We don't need a checklist of what makes those things secure, because security is a complicated concept. So we introduced those. There's a number of things that could be better about them, and I'll talk about that a little bit more later. Okay, there they are. But they give a fairly rounded view, from quite a high level, of the sorts of things to consider. And one of the really important parts of them is that they force us to think about the product or the service we're consuming and how it's used. There are things like personnel security in there: if I've got to worry about the identities of my staff, how will I do that, that sort of consideration. Or governance frameworks. They're as much about the responsibility of the person consuming the product as about the characteristics of that product. They can be a bit opaque. There's a URL I can share afterwards if anybody wants it. It's got a lot more guidance on how to implement these. It's on the website of our National Cyber Security Centre, so it's ncsc.gov.uk. Along with that, we had to develop a set of improved practices. Now, I've not got much time today to talk about those practices. You'll be able to hear a bit more about some of them if you come to my workshop on Sunday. But if you want to do some reading, there are two books that are well worth a look. Adam Shostack's Threat Modeling, sort of a classic in the field at this point. For me, it's a little bit too focused on technology. I'll talk more about that later.
But it's a really good introduction to a lot of the concepts that are important here. Or Agile Application Security, which came out a few months ago through O'Reilly. One of the co-authors of this, Michael Brunton-Spall, worked with me in government, and a lot of the thinking in this reflects conversations that we've had over the past few years. It's very much more his than mine; he's spent a lot more time developing it. So, if we do start accepting that a lot of our security challenges are not specific to cloud, but they're there, and that cloud is an opportunity to rethink how we approach them and how we change them, how should we start thinking about that? The starting point in any conversation should be to understand your context. In a security environment, that's often couched under terms like risk appetite. For me, there's a really important starting exercise of looking at the service line you're working in, or, if you're a relatively small organization, your organization as a whole (that can be quite hard in a large organization), and just starting to reason about: what are the things that are absolutely essential to us staying in business? What are the things that have to be there in order for us to provide the services people expect of us? What are the things that, if they went wrong, would significantly compromise our ability to work, or cause all of our customers to leave us? When you do that, you can start to lay out a set of security objectives. You can start to say, what is most important for us to protect and defend? Ideally, you can get that down into something you can express on a single side of paper, or a single view in an online document. And importantly, it lets you start to also reason about what you will not worry about so much.
Have conversations with senior people around your organization about how you might prioritize this kind of activity. Once you've got a sense of that, you can move on to understanding, and those who've done some threat modeling will very much recognize a lot of this, identifying the key assets that let you meet those responsibilities. That might be data sets. It might be commercial relationships. It might be services provided by third parties, not even cloud providers. It might be source code. It might be people. It might be buildings. It's all sorts of things. You can start to understand them and start to map out the supply chain that lets you meet those objectives that you've got. And that will help you start reasoning as well about what sorts of protections you might put in place. Where have we got unnecessary complexity? I usually think of security as part of a wider piece of service design, so that complexity piece becomes really important. And it works best if you've got the opportunity to change systems as a whole, not simply secure them as they stand. But either way, this is worth doing. Then you want to consider your threats. There's been a lot of talk all over the place about threat models in the last couple of years. Particularly since the Edward Snowden revelations, there's been a lot more talk about personal threat models. How do I secure my communications? How do I understand the tools that I'm using? Unfortunately, too many of those conversations have lacked some perspective. It's quite exciting, if you're working in security, to think about the most capable threat actors. Who are the people who have the most resources to throw at this thing that I'm doing? Which nation state might want to come after me and break what I'm doing, get access to my data? But for most of what we do, most of the time, that's not the right place to start.
Because, as I said earlier, most of the risks, most of the vulnerabilities, most of the attacks come through very basic vulnerabilities. So going through the threats, think about: who might want to break my system, and why? The why is particularly important, and it's about getting a sense of perspective. And that's something that was really important in that change to the classifications process that I talked about in the UK government. When we defined that official level, which is where the vast majority of the work of government happens, one of the things that we said, and we were happy to say publicly, is that for information at that level, for the services provided at that level, it is not worth our while going to great lengths to defend them against what you'd call advanced persistent threats: very large organizations, nation states who are willing to spend years trying to compromise your systems. That's a way to throw away a lot of money. You're probably going to fail, and you're not going to make things better for the citizen. You're not going to set yourself up for change and all of those other things you need. A sense of proportion is really, really important. A sense of where is the attack likely to come from? How can we cover off the basics? And then another thing that's super important to me is that you do this as a whole-team exercise. For far too long, security conversations have happened in secret. And the risks are too significant for security to be done in secret. Because when you do these things in secret, you lose perspectives, you lose context, you lose the opportunity for everybody to bring what they know about your system and your service to the table. So when I say the whole team, I'm thinking very broadly about the service you're offering. If you have a call center, it probably means people from there, because they know about where the service fails. They see things that the rest of you probably don't. If you have designers, they should certainly be in there.
They think about the service design overall, because that might be how you simplify it. And then senior managers, because there's always a conversation about what sorts of risks you're accepting. There's a bunch of practices that you can use to develop that whole-team approach. That's very much what I'm talking about on Sunday. But just making the last few steps very visible to everybody, accepting that you're not gonna get them right first time, being iterative about it, doing show and tells, bringing people in. A lot of the sorts of practices that we use with agile development you can bring to all of this. And then, once you've done all of that, I find it useful to start from the end. What do I mean by that? I mean, start by assuming that something's gone wrong. And there are some techniques that can help you think that through. One of the best I've seen is Bruce Schneier's attack tree modeling. I don't know if any of you have seen that before. But the way that works, and this is a very low-res picture that I grabbed off his website, and there's a lot of academic literature about this: start by assuming that one of those things that you actually care about has gone wrong. Then start thinking about the circumstances that could have allowed that to happen. Then the circumstances that could have allowed those, and follow it on down the tree. That will be human failings. That will be system failings. It'll be supplier failings. It'll be all sorts of different things. But as you do that, you can start to attach probabilities to them. You can start thinking about which of these are most likely. Which of them are you going to care about? So in this diagram, you can see I's and P's. This is a very simple model where you just say some things are possible and some are impossible. And that will let you follow the different branches and see what's going on.
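As a rough illustration of how that possible/impossible labelling propagates up an attack tree, here's a minimal sketch in Python. The goal, sub-goals and judgements are invented examples for this talk, not taken from Schneier's actual diagram:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node in an attack tree: a goal, or one way of achieving the parent goal."""
    name: str
    possible: bool = True              # leaf judgement: P (possible) or I (impossible)
    children: List["Node"] = field(default_factory=list)
    any_child: bool = True             # True = OR node (any child suffices); False = AND

    def feasible(self) -> bool:
        # A leaf is feasible if we judged it possible; an internal node
        # combines its children with OR (default) or AND semantics.
        if not self.children:
            return self.possible
        results = [child.feasible() for child in self.children]
        return any(results) if self.any_child else all(results)

# Root goal: something you actually care about going wrong.
tree = Node("read the customer database", children=[
    Node("steal admin credentials", children=[
        Node("phish an administrator", possible=True),
        Node("brute-force the password", possible=False),
    ]),
    Node("exploit unpatched SQL injection", possible=True),
])

print(tree.feasible())  # True: the goal is reachable via phishing or SQL injection
```

Flipping individual leaf judgements and re-evaluating shows which branches actually matter, which is exactly the "follow the different branches" exercise described above.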
If you can start to draw onto this sort of thing as well a notion of whose responsibility the different parts are, that can really help you reason about the different third-party services you're using, what cloud tools you're using. And then the other thing: I think there were some other talks here about things like chaos engineering, using chaos monkeys and other approaches to break your systems, to see how they respond when you take certain pieces away. And I think there might have been some conversations about game days and other activities. When I was working in government, we ran a number of game days, where we got different teams from around the organization to rehearse different types of failure. We also started investing in what we called red team exercises, where you'd get some quite skilled security professionals to come and simulate an attack against something and see how the team responds, see how it breaks. And where that works best is if you can get some really quite high-quality security experts and pair up other members of your team with them, so that they can observe how this happens, so that they can learn some of those skills themselves, so they can build it into their development and testing practices, and so that they can reason about it and keep testing more. If you have a good enough red team, your services will be compromised. It might take them a while, but it will happen. So when you do this, yet again, perspective's really important. How quickly does it have to fail for you to care about it? How hard have you made it? You're never gonna have perfect security. But this lets you start to see: if somebody who's reasonably good at their job can compromise one of your systems in very little time, then you've got some work to do. If they had to put months into it, maybe that's okay. Or maybe there was something in there that, even though it took them months to find, you still need to fix. There's a bunch of thinking to do in it.
But this starts to take you into a space where you can think about security as an operational practice, more than a big design exercise upfront; it's about responsiveness, and it's about being able to repeat all of these practices and keep improving them over time. So: understand your context, understand what's important to you. Think about who might want to break it, what their motivations are and what their capabilities are. Maintain perspective, and then assume that something's gonna fail and think about how you might test for that and prepare for it. Again, that's all just general security stuff. So what about cloud? Well, as I've kept saying, I really don't think cloud is particularly special in most respects. I think that far more of what we need to focus on is good security practices, and we do them in the context of cloud because that's just how you do technology today. But there are a few things that perhaps have risen to the fore that I just wanna run through quickly before we wrap up. So the first of those is identity. If you go to those cloud security principles, the idea of understanding who can use your services, and what they are doing, is really fundamental. You can see that in point six; you can see it sort of in points three, nine, ten, perhaps twelve, thirteen, maybe fourteen. It runs right through. The nature of technology these days is that we are using lots of different tools. We have distributed systems, complex systems, with a mishmash of software as a service, self-developed software, cloud services. We need to get better at managing identity across those. A lot of organizations are vulnerable because they don't know how to close down all of the accounts of their staff when they need to. Single sign-on type practices, the use of patterns like OAuth, standards like SAML, those sorts of things become really, really important here. And there are better and better tools for managing that.
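A toy sketch of why that matters: closing down a leaver's accounts across many services is only tractable if you can enumerate them programmatically. The service names and account lists below are invented examples; in a real setup each set would come from that provider's user-management API, or be handled centrally through single sign-on:

```python
# Illustrative only: reconcile a list of leavers against the accounts
# held in each cloud service, to find what needs deactivating.

leavers = {"asha@example.gov", "tom@example.gov"}

service_accounts = {
    "source-control": {"asha@example.gov", "priya@example.gov"},
    "project-tracker": {"tom@example.gov", "priya@example.gov", "asha@example.gov"},
    "chat": {"priya@example.gov"},
}

def accounts_to_deactivate(leavers, service_accounts):
    """Per service, the accounts belonging to people who have left."""
    return {
        service: sorted(users & leavers)
        for service, users in service_accounts.items()
        if users & leavers
    }

for service, stale in accounts_to_deactivate(leavers, service_accounts).items():
    print(f"{service}: deactivate {stale}")
```

With single sign-on the whole reconciliation collapses to disabling one identity at the provider, which is much of the argument for it.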
So when I'm evaluating any sort of cloud service, one of the things I look for is: is there gonna be a single sign-on integration, or if not, are there some relatively simple APIs I can use to manage user access? The next one is very similar to what you'll hear if you're in a lot of the operations content, the DevOps, site reliability engineering kind of thing: emphasize observability. Now, one of the biggest fears that's regularly reported that CIOs have about cloud adoption is a loss of control. There's this myth that we knew what technology was being used in our organization, so we could lock that down and control it. It has never been true, but it is the case that there are now a lot more options for staff to very quickly try out new tools, spin up prototypes, scale services, in a way that could be invisible to the organization. That's a really good thing for the most part. It lets us move at pace, it lets us harness new tools and get better at our jobs. But look for ways to observe what's going on. The big infrastructure-as-a-service providers have all focused quite a lot on unified logging systems: ways to understand at a billing level what resources are being used, and at a day-to-day level what events are happening, what resources are being started and stopped, who's authenticating where. And then there are tools that will let you aggregate and analyze that, so look for those, make use of those. Unfortunately, that sort of practice isn't as common in the software-as-a-service world. I'm hoping that's going to change. Salesforce will give you some sorts of access, but you have to pull a lot of that through their bespoke APIs. Some other products have similar things. If you're using Google Apps or Office 365, they've got reasonably good things.
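As a sketch of the kind of question a unified audit log lets you answer, here's a minimal example that flags authentication events from accounts outside a known staff directory. The event shape and names are invented for illustration; real providers' audit logs (CloudTrail, Google Workspace and so on) each have their own schemas:

```python
# Illustrative only: scan aggregated audit events for authentications
# by accounts that aren't in the staff directory.

known_staff = {"asha@example.gov", "priya@example.gov"}

events = [
    {"type": "authenticate", "user": "asha@example.gov", "service": "ci"},
    {"type": "start_instance", "user": "priya@example.gov", "service": "compute"},
    {"type": "authenticate", "user": "mallory@example.net", "service": "ci"},
]

def unknown_authentications(events, known_staff):
    """Authentication events whose user is not in the staff directory."""
    return [
        e for e in events
        if e["type"] == "authenticate" and e["user"] not in known_staff
    ]

for e in unknown_authentications(events, known_staff):
    print(f"unknown user {e['user']} authenticated to {e['service']}")
```

The point is less the code than the capability: you can only write checks like this if the service exposes its events in the first place.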
I'm hoping that some standards start to emerge that we can expect of software-as-a-service tools, so we can say: I'm gonna use that, I'm gonna consume the logs, I'm gonna pull them in alongside everything else and understand what's going on across the estate. So look for that. And one of the big challenges for a lot of those companies, often startups making software as a service, is that people don't go and ask for this stuff, so they don't know to prioritize it. It's not that it's hard for them, it's that we don't ask. So ask. And then we need to think about how we build quality in. A lot of that goes back to the stuff that's in the Agile Application Security book. And a lot of it's common with all of the other practices that, again, we're talking about today. We need to think, all the way through: have we understood the context that we bring to any decision? Have we understood what we're aiming for here? Have we built an understanding of how this could go wrong into all of our other practices? And have we thought about how we're going to patch? Have we made that part of our processes and part of our practices? And with that, again, like everything else we're talking about today, we need to make change easy. The NHS WannaCry incident, which had such a major impact across the UK, happened mainly because it was really hard for those organizations to patch all of the systems they had. They'd not designed for that. It wasn't a thing they worried about at the time. It wasn't a thing that they'd invested in. That has to change. So when we're doing new software development, that's a relatively easy thing for us to do now. We've got a lot of established practices around continuous delivery. We also need to think back: how are we starting to affect the systems that we've inherited to do that? And sometimes that in itself can be the case for change.
Even if we've got fairly stable old legacy systems that have been doing their jobs for years and mostly keep working, if you can't patch them, if you can't apply security updates, perhaps that's your case for moving them, which will then have knock-on benefits because you can move to more modern architectures. But for me, one of the biggest benefits of cloud is the way that it democratizes access to technology. At the infrastructure level that most of us tend to think about, it's become much, much easier to summon resources through an API or a command-line script, to prototype something, to test it and then to scale it. Whether we're using an EC2 virtual machine or, if we're further along, a Lambda or something else, it's much easier for us to do that, so more people can demonstrate their ideas in practice. But it's also much easier for any member of staff to see tools online and adopt them if they'll make their jobs better. That's a huge opportunity for us: to make use of everybody in our organization to find, think about and harness new tools. And it's also one of our biggest challenges. It's where thinking about security needs to meet general thinking about what leadership and management mean in the modern age. Because in everything we're doing, we can't just make top-down decisions, or we'll miss out on opportunities. We need to do a lot more work to help everybody understand the context they're working in. And with security it's very easy to get fascinated by the details. But in the words of one of the government design principles that I talked about yesterday, we need to do the hard work to make that stuff simple. That's one of the things I'm keen to come back to about those cloud security principles I mentioned earlier.
They were really, really helpful for a certain group of people who understood what those terms meant or had the time to read a lot of the guidance behind them. They weren't much use to an individual member of staff who wants to adopt a new project management tool because they think it will make their team more efficient. And they don't clearly speak to the fact that there's an adoption cycle for new technologies, and that the risk you carry when you're trialling something with three people on a small team is very different from the risk you carry when it's business critical across an organization of 300, 3,000, 30,000 and so on. There's more work to do there. I don't have all the answers for how we do that, but I hope we're going to continue to have a conversation about how we express this stuff and how we help people outside our circles reason about it. Because for far too long, the way we've approached security has been to think in terms of locking things down or putting controls in place. It doesn't work, because when people need to get their jobs done, they will find ways to work around controls that get in their way. Most people want to do the right thing most of the time, and we need to start from there. As the National Cyber Security Centre has started saying: if security doesn't work for people, it doesn't work. If we're really going to get the benefits of cloud, we need to think about how we spread out the context and the understanding and make it easy to reason about these things. Doing the work ourselves that I talked about, understanding our organizational context, understanding our assets, understanding the threats and working out how we're going to test, is fundamental to that. It's really important that we do that in a way that's open and accessible so that everybody can engage. So that's a bit of a whistle-stop tour through a bunch of different ideas. I'm going to develop some of them further on Sunday.
There's still some space if you want to come along. I think we've got a little bit of time for questions, or I'll be around for a while.

My question is on the principle of build quality in, which you spoke about. It's a very standard principle. There are really two questions baked into one. One is the whole notion of quality for cloud: do you think we should look at it differently from, say, the on-premise applications that we've been used to? And secondly, quality as well as security for a cloud application versus a cloud-native application: do we need to think about those separately?

So I think you can think of quality as having a number of levels. At the top level, quality is about our ability to provide the right service to the right person in the right way, reliably. And I don't think at that level our understanding of what we're trying to achieve is any different on-premise or in cloud. At the next level, the attributes might look different, because our options for how we implement some of those things change as we get more elastic resources and a different set of practices. So the way you might approach resilience and availability of your service in cloud, where it's much easier to fail over and build redundancy in, will look different from on-premise. I think it's quite important to be thinking at both of those levels: what is it that we're trying to achieve overall, and then what are the tools available to us to do that? And I think the same is true whether we're assessing an old-style application that we've migrated to cloud, a software-as-a-service application, or something we've built ourselves on cloud-native principles: our options for doing it will be different in each case, but the overall goal should be consistent. Then we need to reason about which of those approaches is most appropriate.
So if you have a very high level of concern about confidentiality, to the point where a contractual guarantee that the system administrator at a third-party company won't disclose any of your data isn't good enough, if you're not comfortable with just that legal guarantee, then that probably means you need to avoid most software as a service, because very few of those products are designed with per-client encryption that the provider can't break. So yes, you need to start at that highest level, and then you can start to reason about whether you'll get the guarantees you need using the different approaches available to you.
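To make that last point concrete, here's a toy sketch, in Python with only the standard library, of what "per-client encryption the provider can't break" means in shape. This is deliberately not real cryptography (a real system would use a vetted library and an authenticated cipher); it only illustrates the trust model: the tenant holds the key, the provider stores only ciphertext.

```python
import hashlib
import secrets

# TOY ILLUSTRATION ONLY -- not real cryptography. It sketches the trust
# model of per-client encryption: each tenant holds its own key, the SaaS
# provider stores only ciphertext, so a provider-side administrator cannot
# read the data even with full access to the stored records.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from the tenant's key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream: the same operation inverts itself

tenant_key = secrets.token_bytes(32)          # held by the client, never the provider
stored = encrypt(tenant_key, b"customer record")  # all the provider ever sees
restored = decrypt(tenant_key, stored)        # only the key-holder can do this
```

The contractual question in the answer above becomes a technical one: without `tenant_key`, the provider's administrators hold only `stored`, which tells them nothing.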