Today we're going to talk to you about how to secure your infrastructure-as-a-service environment in one minute. Obviously our talk is not one minute, but we'll try to secure it in one minute. Obviously you'll see it after it's already been secured. So, a short introduction before we get into the details. My name is Nir, and I'm a public speaker as well as managing the security for the retail division within NCR. Here I'm speaking about something that is my passion, and just one thing that you should know about me: I like sports, just not sweating sports. That's about me. I'll let Moshe introduce himself. Hi all, thank you for coming. My name is Moshe, and actually I don't like sports, but that's pretty much the same as not liking non-sweating sports. I have been working with the innovation scene in Israel for the last couple of years, working with startups. We have a lot of startups in this neck of the woods, as you probably know. And I've been examining their challenges over the last couple of years, how they adopt cloud, how they handle cloud security, and this is where the talk is coming from. This is from our experience regarding the new adoption of cloud services. Now, cloud and cloud security are such large words. I'll try to emphasize exactly what we mean and focus the talk. First of all, we're going to talk about IaaS, infrastructure as a service, which I'm pretty sure you're familiar with as a term: Amazon Web Services (AWS), Google Compute Engine, Azure, Rackspace. Those are basically the providers that we talk about. Inside infrastructure as a service there is a layer, a relatively new layer that has been introduced in the last, I don't know, five to six years: the orchestration layer. It's basically the layer that enables the automation, the allocation of resources between the different cloud services.
It's the layer that will spin up your virtual machine, connect the instances, connect IP addresses and the storage. Basically a very important layer, and it's also the one that needs to be addressed when we talk about security. So if we want to focus what we are talking about: we are talking about how to use orchestration in order to increase the security in IaaS environments. And why do we need this talk? I mean, what has changed? What are, basically, the attack vectors that we're talking about? What you see here are basically the attack vectors that are either unique to cloud or, most of them, simply amplified by the different cloud characteristics. We're not gonna talk about all of them. We're gonna focus on three, but I'm gonna give you a quick briefing just for the background. We have the provider administration. Again, somebody else is managing our data. We have the management console, which basically gives you access to the infrastructure as a service. And it's a very wide dashboard. You can do so many things with it. You can access almost every aspect of your organization. Very scary attack vector. We've got the multi-tenancy virtualization, right? Everything on infrastructure as a service runs in a multi-tenant environment using virtualized software and hardware, so basically it's also an attack vector. I'm not gonna talk about this one. There's a lot of talk about hypervisor security, side-channel attacks and stuff like this. Different talk. Automation and API. Everything in the cloud is API-based. I mean, everything you do on the dashboard you can do without the dashboard, and this is what most cloud programmers usually do. And it's also about automation, right? You move to the cloud in order to automate stuff. Also an attack vector we're gonna talk about. Supply chain attacks. You buy software from a software-as-a-service vendor.
He builds the software on top of platform as a service or infrastructure as a service, so you have to secure the entire chain. Not gonna talk about this one either. Side-channel attacks. Again, the things that come from the virtual environment. Insecure instances. In the cloud it is very easy to launch instances, spin up instances. Sometimes, because it's so easy, we launch them and we forget about them. We forget to harden. We forget to do all those important things we used to do in the traditional environment. And of course this is another thing that we are going to talk about. So we're gonna focus on those three attack vectors: the management console, insecure instances, and automation and API. Now let's take a look at how those attack vectors are being used in real-world attacks. Let's take the story of BrowserStack, for instance. A couple of months ago, BrowserStack, a software-as-a-service company built on top of Amazon Web Services, which is basically their hosting provider, was hacked, and this is how it went. An attacker found his way in through an insecure instance, basically an instance they spun up a couple of years ago and forgot about. It was standing there with a Shellshock vulnerability. This was the entry point, the starting point. He managed to get in. He found an API access key. An API access key: if you give somebody an API access key which has full permissions, it's like giving him the keys to your data center. It's basically about the same thing. So he found an access key. With this access key, he managed to spin up a new instance and whitelist this instance in the firewalls. Once he had an instance running, with firewall rules opened, he attached a backup disk. Inside the backup disk, he found a database connection string, and basically from there on it's very simple how you move on to the organization's data.
So again, attack vectors: insecure instances, forgot to lock them down. Automation and API, all of that cool stuff you can do like connecting a backup disk or whitelisting an IP address in the firewalls, those are the automation and APIs. And of course the wide dashboard that allows you to do so many things, like connecting a backup disk and changing firewall rules from the same dashboard. So those are the new attack vectors that we want to cope with, and I say that we don't have good enough tools to do so. We simply haven't adapted security to be good enough for those environments, for this new infrastructure and also for the new software development methodologies that are coming. A lot of software development, and also infrastructure services, have changed because of infrastructure as a service. We now have auto scaling: once your server hits 80% CPU load, it will replicate itself automatically. We have entire environments that are spinning up, like 200 servers spinning up at once, processing, and terminating after 10, 20 minutes or even one hour. It's not something we were used to on the traditional network. So it's an accelerated lifecycle. We can see a lot of environments where, instead of upgrading to new versions, instances are simply being terminated, oops, sorry, and launched again with the new software. And one more thing: the way you're charged for the infrastructure is changing, because the cloud providers started with a billing cycle of one hour, and they're reducing it, so you can be billed for your servers per minute or per ten minutes. So it gives the organization more incentive to lower the time that the servers are up. And why is that a problem? Because so many of our corrective controls in security are based on maintenance windows, right? We do patch management, vulnerability scanning, penetration testing, and all of this is done in periodic maintenance windows, right?
Patch Tuesday: sometimes it's once a week, sometimes it's once a month, sometimes it's never. But you have a maintenance cycle, and how can you do maintenance windows if your servers are only alive for one hour, or two, or three? Security has not been adapted to those environments. And what happens? Companies are moving to the cloud so that infrastructure will not slow them down. And what happens next? Security slows them down. And you know what happens if security slows you down: companies will simply give up on it. So this is the problem we think we need to solve. We in the security community need to adopt new tools, new methodologies. It's not even about tools, it's a new way of thinking. How we automate security, that's the next challenge. Developers learned how to automate software deployment and software testing; we are still way behind. This is how we started Cloudefigo. Cloudefigo is open source. You can download it from the link, the link is here, it's on GitHub. Everybody can download it and take a look at it. By the way, it's based on the work of Rich Mogull from Securosis. I don't know if he's in the audience, but if he is, the entire credit goes to him. So Cloudefigo is a tool for automating the processes that were mentioned. By the way, we understood the importance of creating a tool, so we decided to invest in it. Basically, we're investors in it: we invested a whole $5 in the logo. I hope you appreciate it. Okay, so this is the tool that we started. It's called Cloudefigo. At the end, we'll give the details for everybody who wants to contribute. But first of all, let's talk about what it does. It basically automates the instance lifecycle, the instance operations in the cloud. I mean, we're talking about how to launch servers, load security configuration, encrypt disks and volumes, scan for vulnerabilities, right?
All of the stuff that usually requires maintenance windows, and then move into production. What we're gonna do next is basically show you a couple of those steps and what we do in them. But first of all, let's talk about the components that we used in order to build this lifecycle. You can swap any of these components for the components that you use. We chose these usually either because they're free and open source or because our environment was on Amazon Web Services, but you can definitely, with small changes, migrate it to other vendors. So what are the components inside this accelerated lifecycle? We use object storage; in this case Amazon S3, but you can use any other object storage from other vendors. We use vulnerability scanning in order to make sure that the instance is ready to go to production; in this case we use Nessus. You can also use other scanners. Today there are scanners that are AWS-aware, connect automatically to the AWS APIs, and can give you some benefits. We use cloud-init. Cloud-init is a perfect tool for automation. If you're not familiar with it, invest five minutes reading about it. Cloud-init allows you to run scripts with root permissions when the server is launching, so it's a great place to put your granular scripts, adapted to your environment. For configuration management, we are using Chef. You can use any other configuration management software; we just use Chef because for our purposes it's very convenient and free. We use IAM roles, which is basically Amazon's name for permissions for servers. You give permissions to different servers, because servers interact with the console, right? Usually when we talk about permissions and roles, we talk about user-to-dashboard. Amazon IAM roles can give permissions to instances, to servers: what they can do with the Amazon APIs, what they can do inside the Amazon environment.
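To make the cloud-init piece concrete, here is a minimal sketch, not Cloudefigo's actual script, of the kind of user-data you would hand to the launch API; cloud-init runs it as root on first boot. The package names and the Chef install URL are illustrative assumptions, not taken from the tool.

```python
# Base user-data script (illustrative): patch first, then pull in the
# prerequisites the rest of the pipeline needs.
USER_DATA = """#!/bin/bash
# Patch everything before the instance does any real work
yum -y update
# AWS SDK for the automation calls, then the Chef client
pip install boto
curl -L https://omnitruck.chef.io/install.sh | bash
"""

def build_user_data(extra_commands=None):
    """Append environment-specific hardening commands to the base script."""
    script = USER_DATA
    if extra_commands:
        script += "\n".join(extra_commands) + "\n"
    return script
```

The resulting string is what you would pass as the instance's user-data at launch time, e.g. `build_user_data(["sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config"])` to also lock down SSH.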
A lot of the research we did, a lot of the development we did, was in order to make sure that we have the right configuration and basically access controls, and we will elaborate a little bit more later. And we do volume encryption. In the cloud you go to encryption by default, right? The only question is how you do the encryption. We demonstrate a way to automate both the encryption creation and also keeping the keys. What I'm talking about is volume encryption, right? Not encryption in the database. I'm talking about OS-level encryption, where I am basically encrypting the volume. What will be in the volume? Usually you install your database into that encrypted volume. So you make sure that nothing like BrowserStack can happen to you, that somebody can get a hold of your backup. He will not be able to snapshot your backups, right? He will not be able to use the data inside them. So this is the lifecycle that we're talking about: launching an instance, updating it, controlling it, scanning it, moving it to production and then terminating it. Basically the lifecycle of every average cloud server. What we'll do now is basically go over those steps, those phases, and then give you a quick demo of each one of them. So, when you launch an instance, every machine handles its own encryption keys. When a machine is launched, it starts in a remediation group. Only when it's ready will it be moved to a production group. Basically it's a methodology we know from network access control, right? Network access control prevents workstations from connecting to your corporate LAN. How does it do it? It makes sure you're okay, and when you're okay, it moves you to the production VLAN or users VLAN. Management of those attributes usually requires permissions.
So we start there, and usually those permissions are higher when you're talking about the launching phase, and when you move to production you want as few permissions as possible. So we created something that is called a dynamic IAM role. Nir, can you explain a little bit what you did there? Yeah, so basically, since we wrote the API calls to Amazon, we know which API calls we have in the code. Therefore we created a list of the permissions that we need during the launch of the instance. But actually we created a new concept, at least that's what we call it: a dynamic IAM role. Basically, when we launch an instance, we can assign only one role, one very specific role, to this instance on Amazon, and we won't be able to change the role afterwards. That's how Amazon works. So that's the reason why we decided to just edit the role when we're moving it to production. In production, we won't need permissions such as, I don't know, move instance from remediation to production, or put encryption key on the storage. We won't need them, and that's the reason why we're reducing the permissions later. I will demonstrate it. This is how the role looks at the launch phase: a lot of different ACLs, it gives you wide permissions. Later on we'll show you how, when the server is moving to production, it is much reduced. Another thing we do at launch, again with cloud-init, as I said, a great thing for automation: when the instance is launched, we simply inject all the scripts we want into the launching phase. Sometimes people ask why we didn't use a predefined image. A predefined image basically contradicts the idea of DevOps, contradicts the idea of automation, because each time there's a new patch, you need to prepare a new image again. So we prefer to use the latest images and run the initialization script when the server is launching. At this point, let us show you how it works.
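Before the demo, the wide-then-reduced policy idea could be sketched roughly like this. These are illustrative policy documents, not the exact ones Cloudefigo ships; the action list, bucket name and key name are assumptions for the example. Since the role attached to the instance can't be swapped, it's the role's inline policy that gets replaced at the move to production.

```python
import json

# Launch-time policy (illustrative): wide, because the init scripts call
# many APIs -- attach volumes, change security groups, fetch scripts.
LAUNCH_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:ModifyInstanceAttribute",
            "ec2:CreateVolume",
            "ec2:AttachVolume",
            "s3:GetObject",
            "s3:PutObject",
        ],
        "Resource": "*",
    }],
}

def reduce_to_production(key_bucket, key_name):
    """Return the minimal policy a production instance still needs:
    read access to the single S3 object holding its encryption key."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::%s/%s" % (key_bucket, key_name),
        }],
    }

production_policy = reduce_to_production("cf-keys-4f2a", "encryption.key")
print(json.dumps(production_policy, indent=2))
```

Every EC2 permission is gone from the production policy, so even a stolen credential from a running production box can no longer spin up instances or rewrite firewall rules, which is exactly the BrowserStack escalation path.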
So before I start: well, you know that all the people at DEF CON are not connecting to the Wi-Fi, so all the AT&T and Verizon networks are basically flooded. So we hope that our online demo will work. If not, we'll have a backup, but that's fine. As for the demo, basically, I wanna start with explaining what we have in the Cloudefigo tool. So, wow, that's big. Basically, we developed the tool in Python. Since we wanna have API calls, it's basically exposed by Django, and now we're just starting the server. On this server, we have our own API call to, let's say, just launch an instance. So we'll just go and launch an instance. When we're launching this instance, it takes time. We'll see in the Django output that we're actually creating a new role on Amazon IAM, but then we need to launch an instance with this role, and since Amazon has a pretty wide infrastructure, it takes time to synchronize between the IAM role that was just created a moment ago and the instance we're trying to create. So basically, that's the reason why we have timeouts. It's pretty common when you get into development with Amazon: you'll see that you put a few timers here and there just to make sure that it works. And eventually, we see that we have here a 200, so we should be good to go. So now I'm connected to the Amazon Web Services console. I'll just, like, okay. And we can see that we have here a new instance called Secured Instance. This is the instance that is eventually gonna start and be the secured instance down the road. So now, when we're starting the whole process, we can see that when it starts, we have the remediation security group, which is where we're starting. And we also have the role that we created. You see, it's a pretty long name here, so I'll just click it. And I wanna see the list of what I am allowed to access. So basically, when I go to the policy, you can see that it's a pretty long list of what I'm allowed to do, but later we will reduce it. So I'll let Moshe continue with our explanation.
Okay, by the way, we're sorry about the fact that the screen resolution is low, so you don't see the entire screens, but those of you who are familiar with Amazon probably understand where we are. For those of you who are not, we're basically looking at the EC2 instances launching and at the IAM roles screen. Okay, two different modules. Okay, moving on, the instance is launched. As we speak, it's basically initializing itself. What happens next is we update the OS. Again, we don't have maintenance windows to do patch management, so we do it on the spot: we do an upgrade of all packages. Basically, it is done through the cloud-init script. We install all the prerequisites, everything we need for Cloudefigo to move on. Do you want to explain what kind of things we have over there? Yeah, so again, since we're using Python, we have basic packages, we have the Python pip wheel. Basically, we want to have pretty quick installations. We also have the Amazon SDK, which is Boto, and the Chef client, because, as we already mentioned in the components, we have a management component. And we also have our scripts on S3, where only these instances are allowed to access the scripts and download them, because we may have some configurations that may be kind of secret, or you can define what you want to have there. So that's the reason we also restricted the access control there. Okay, so, oh, it jumped. Okay, okay. The next phase, the update phase, there's nothing to show in the demo, right? So we skip it: it simply installs things and upgrades, and there's no point in showing you that those packages are installed. The next phase is the phase where I take this new instance and harness it: I put it under my configuration management system.
Again, usually when this happens in the real world, on an on-premise network, the IT guys finish installing the servers, then they move them to the security guys, they wait a couple of weeks, the security guys do the hardening, do the configuration, install their antivirus software, IPS software. All of those tasks are really slowing down the progress. We want to do them really fast, including all those tasks that the security guys need to do. So what we did: basically, at this point cloud-init installs the Chef client. Chef, again, is configuration management. In Chef, what you do is build a recipe. A recipe is basically a list of packages and commands that you want to run. We attach the recipe to the servers, and then the server downloads everything we need from the security point of view. There are a lot of Chef recipes on GitHub and elsewhere. You can use them to automate almost every aspect of your operations. So once the client is registered, the policy is downloaded, and then what we do is generate encryption keys. As I said, the goal of the encryption is basically to protect the disk. Inside the disk you will probably have your application files or database, doesn't really matter. We use dm-crypt, which is basically a disk encryption utility for Linux. Very common; you can use any other utility out there for Windows or Linux. But the real question here is where you store the keys. Usually when you're working with infrastructure as a service you have a couple of options. You can save them with the cloud provider, right? Some of the cloud providers will even give you HSMs or places where you can store keys. It's okay, but it's still vulnerable to some attacks, like a malicious insider inside the cloud provider, or basically a subpoena from the government or other court orders, stuff like this. It's a good enough configuration for some organizations; it might not be good enough for other organizations. You can save them on premise, right? You control the keys.
Then you can control who has access to them, you put them on an HSM. But then you have to think about how you transfer the keys in and out of the cloud, and there you expose them again. Every method has its pros and cons, right? Probably, if you're a bank, you want to keep the encryption keys in your hands and transfer a temporary key to the cloud somehow. You can also use a third party. A third party can be a key escrow service; today we have companies that provide you an HSM as a service to hold the keys. This way it can also protect you from a government warrant at some point, yeah? Because a government warrant has to go to both providers, which complicates it. But again, you need to move the keys between the different providers. So again, every method has its pros and cons. It basically depends on your threat analysis. What we did here in Cloudefigo is build a system that allows you to be very flexible. In what we show here, we keep the key in a special place inside S3, the object storage. If you're working on Amazon, you can very easily migrate the application to keep the keys in a different cloud provider's object storage, like Azure or Google Compute Engine, and then use a third party, or your own premises, in order to make sure that the keys have, I would say, good enough security. I mean, it's not bulletproof, but they're quite protected. We basically invented a system, and Nir will explain how we access those keys on the object storage. So I will just translate "good enough": good enough equals annoying, okay? We wanna make it annoying enough for anyone to access the keys. So basically, we put our keys on S3, the object storage on Amazon. What we're actually doing is, during the launch of the instance, we're generating a key, and we're not storing the key anywhere on the instance.
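As an aside, the dm-crypt step itself is a few root-level commands. A sketch of what the automation would run, with the device path and mapper name as assumed example values; the commands are shown as argv lists rather than executed, and the key would be fed on stdin so it never touches the instance's own disk:

```python
import base64, os

def generate_volume_key(nbytes=64):
    """Random key for the encrypted volume, generated at launch and
    shipped straight to the key store, never written locally."""
    return base64.b64encode(os.urandom(nbytes)).decode("ascii")

def dmcrypt_commands(device="/dev/xvdf", name="secure_data"):
    """The cryptsetup/mount invocations (run as root) that format the
    attached volume with LUKS, open it, and mount the mapped device."""
    return [
        ["cryptsetup", "luksFormat", "--key-file=-", device],
        ["cryptsetup", "luksOpen", "--key-file=-", device, name],
        ["mkfs.ext4", "/dev/mapper/%s" % name],
        ["mount", "/dev/mapper/%s" % name, "/data"],
    ]
```

In a real cloud-init or Chef run, each argv list would be passed to something like `subprocess.run(cmd, input=key.encode(), check=True)`; anything written to `/data`, typically the database, then only exists encrypted on the volume.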
Basically, we're creating a key and an ID, and using the ID, which is basically a combination of a few parameters that we get from the instance, like the MAC address, the instance ID and a few things that don't change on the instance, we actually create a bucket with this name. So it's a hash. It's pretty hard to guess. You can reverse the code; again, it's annoying, but you can do that. But eventually, when you get into the object storage and when you get the name of the bucket that stores the key, you will also have to face two things. One, you'll have to access the storage with the same account that is running this instance. This is one thing. And the second thing is actually a Referer header that we add to each request when it goes to the storage. And this Referer is basically a SHA-512 of additional data that we generated. So basically, if you try to get access to this object storage, it will be annoying, really. Okay, so basically what we did is the creation of a dynamic policy on S3, too. So we also have a dynamic policy with the account name and the SHA-512. And let's just do the demo. So I'll just move to my Chef. Okay, so here, first of all, it's just a proof: we succeeded in connecting the server to Chef. With Chef, it's not difficult. Basically here, we can see the run list. Our run list contains volume encryption. You can have patch installation, database installation, whatever you need, in this script. And that's kind of the first part of it. And the second part, as I mentioned: we already encrypted the volume, but we need to know where the encryption key is. So therefore, I will need to connect to the server, because I have no other way to know which volume I should connect to. So let's go to the secured instance. Let's connect to it. So I'm getting into the Cloudefigo log. It's just for demonstration purposes. Oh yeah. Is it better? Okay. So, and still I'm writing below. Anyway. Yeah, but I have the log here.
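Stepping back from the demo for a moment, the naming-and-token scheme just described could be sketched like this. The hash choices, the `cf-` prefix and the secret salt are illustrative assumptions, not Cloudefigo's exact code; `aws:Referer` is the real S3 policy condition key being relied on.

```python
import hashlib, json

def key_bucket_name(instance_id, mac_address):
    """Derive the bucket name from attributes that don't change for the
    life of the instance; an attacker has to reverse the code to guess it."""
    material = ("%s|%s" % (instance_id, mac_address)).encode()
    return "cf-" + hashlib.sha256(material).hexdigest()[:40]

def referer_token(instance_id, mac_address, secret_salt):
    """SHA-512 of additional data, sent as the Referer header on every
    request to the key bucket."""
    material = ("%s|%s|%s" % (instance_id, mac_address, secret_salt)).encode()
    return hashlib.sha512(material).hexdigest()

def bucket_policy(bucket, account_arn, token):
    """Dynamic S3 policy: only this account, and only requests that carry
    the expected Referer token, may read the key object."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": account_arn},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::%s/*" % bucket,
            "Condition": {"StringEquals": {"aws:Referer": token}},
        }],
    }
```

So an attacker needs the bucket name (a hash), the right AWS account, and the right SHA-512 Referer value all at once: "annoying," exactly as advertised, though not cryptographic access control on its own.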
So you will probably see in the log that I have a bucket name like this. It's a pretty unknown bucket, but we'll need to look for this bucket on S3. So let's go to S3 and look for that. We're on S3. As you can see, we had a lot of demos. And here we will look for the bucket and go to its properties. If you look at the properties, you'll see that here, in the permissions, we have the edit bucket policy. And you can see here that we actually granted access only to the specific bucket, only with a specific Referer header. Can you put it up? Oh yeah, I need to put it up, but I can't. Yeah. Anyway, Moshe? Okay, moving on. So now we have an instance that is controlled. We have all the software we want. We have launched the encryption. We got the keys. What do we do next? Is this instance ready to move to production? The basic questions would be: is it hardened enough, and does it have any vulnerabilities? So this is where we launch an automatic scan for the instance. The nice thing about cloud is that it enables you to automate the scan and then move the item to production immediately: change the firewall rules, move it to different security groups, all of those games can be done automatically. So what we do here is launch a Nessus scan. We analyze the results automatically. If we get a finding in the scan that is over medium severity, we don't move the instance to production; it will stay in remediation. Anything medium or lower, and it will move into production, sending us the result. Here, can you show how it works? No. Okay. So we'll go to the Nessus. I'll just need to exit the deck. Okay. So basically we'll go to our Nessus. Yeah. Oops. And please don't record the IP address. Don't access it. No, it's not secure. Believe me. It's just taking very long. A moment of relief. It's so hard to do live demos. So we have here the scan.
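The gating rule just described, anything above medium stays in remediation, amounts to only a few lines once the scan results are parsed. A sketch with assumed severity labels (real Nessus exports use numeric severity codes you would map onto these):

```python
# Ranked severity labels (assumed mapping; adjust to your scanner's output)
SEVERITY = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def ready_for_production(findings, threshold="medium"):
    """Move to production only if no finding exceeds the threshold;
    otherwise the instance stays in the remediation security group."""
    limit = SEVERITY[threshold]
    return all(SEVERITY[f] <= limit for f in findings)

assert ready_for_production(["info", "low"])        # clean enough: promote
assert not ready_for_production(["low", "high"])    # high finding: keep in remediation
```

The return value then drives the security-group move via the cloud API, which is the whole point: the scan verdict and the network change happen in one automated step, with no maintenance window.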
Basically we can see here that we have one low, one informational finding. So that means that the server should be in production now. Let's look at it. Low resolution, sorry. So we'll just refresh it and see what's going on with our server. So I'm going down. And we can see that the server is in production. Okay. Excellent. Yeah. It's working. Okay. So we got a server, and it's moving to production. From now on, basically, we've finished the launching phase, or the initialization phase, I wanna call it. We move to production. And there are a couple of things we need to remember in production environments. Permissions are lower, okay? The server has now done everything it needs in the automation part; we need to reduce permissions. This is the IAM role after Cloudefigo finishes configuring it, right? If you remember, the IAM role permissions in the beginning were really that big. Now it's trimmed down to only a couple of very specific things, basically access to the S3 bucket where it can find its encryption keys. So it's done dynamically. We reduce it dynamically. And then we move to the ongoing management, right? What kind of ongoing management? Usually in cloud you use compensating controls. Basically what it means is we are checking whether somebody has launched any instances that are not managed by the infrastructure that we've created, right? We want to identify if there are servers that have somehow popped up somewhere and we are not managing them. And we want to raise alarms if somebody has gained access to a server and is now trying to do something, right? How do we do those? Basically, for the first thing, for checking whether there are servers that are managed or not, we're pulling from Amazon the list of servers and we compare it to the list of servers that are in Chef. Bottom line: if we see a server on Amazon which is not in the Chef directory, it's probably not a good server.
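That comparison is just a set difference. A minimal sketch, with made-up instance IDs; in practice the two lists would come from the EC2 API and the Chef server API respectively:

```python
def unmanaged_instances(aws_instance_ids, chef_node_ids):
    """Instances visible to the cloud API but absent from the Chef server
    were launched outside the pipeline -- by mistake, or maliciously."""
    return sorted(set(aws_instance_ids) - set(chef_node_ids))

print(unmanaged_instances(
    ["i-0a11", "i-0b22", "i-0c33"],   # what EC2 reports
    ["i-0a11", "i-0c33"],             # what Chef knows about
))  # -> ['i-0b22']
```

Anything in that output is a candidate for investigation or automatic quarantine, since by construction it never went through the remediation-and-scan lifecycle.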
Somebody launched it, either by mistake or it's a malicious server. The second thing we do is monitor Amazon CloudTrail. CloudTrail is basically a logging mechanism for every activity you do on the dashboard or through the API, right? So basically what we do is look for certain things in the logs. Those of you who have been trying and playing with Amazon CloudTrail know it's not well documented. Again, it's new; nothing bad to say about Amazon, it's just new, and there's not much experience with it yet. Here is what we found out. If you try to do something on Amazon and you get an access denied because you don't have permissions, it can come up as two different log entries: one of them is AccessDenied, and one of them can be Client.UnauthorizedOperation. Usually if you try to do something in S3 you get AccessDenied; in other places, like EC2, you get Client.UnauthorizedOperation. So we are looking for both of these in the logs. This will be very useful for you if you'll be playing with CloudTrail. Again, it's a great tool; it just needs some more experience, from us, the community, that is. So basically, let's take a look at the roles and the alarms. Okay, so we'll get into the production role and we'll see what exactly we have there. So we'll go back to this role, this long role, we'll refresh it and we'll see what's going on here. We can see that the policy allows us only to access a very specific object on S3. So this machine cannot do anything else anymore. So let's do a test, let's see what happens. I'll go back to the instance and I'll try to, you can't see the text, but I promise you that you'll see it in a moment. So I'm just trying to access a specific IAM resource with the AWS SDK. I'll just try to list access keys, which is pretty much something that, as a hacker, I would wanna do. And as you can see here, I'll do, yeah.
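Filtering CloudTrail for those two denial codes could be sketched like this. The `errorCode`, `eventName` and `userIdentity` fields are CloudTrail's real record fields; the sample records themselves are fabricated for the example.

```python
import json

# The same denial shows up under two different codes depending on the service.
DENIAL_CODES = {"AccessDenied", "Client.UnauthorizedOperation"}

def denied_events(raw_log):
    """Yield (who, what) for every denied API call in a CloudTrail log file."""
    for record in json.loads(raw_log).get("Records", []):
        if record.get("errorCode") in DENIAL_CODES:
            yield (record["userIdentity"].get("arn", "?"), record["eventName"])

# Fabricated sample: one denied call, one successful one.
sample = json.dumps({"Records": [
    {"eventName": "ListAccessKeys", "errorCode": "AccessDenied",
     "userIdentity": {"arn": "arn:aws:sts::111:assumed-role/prod/i-0a11"}},
    {"eventName": "DescribeInstances",
     "userIdentity": {"arn": "arn:aws:iam::111:user/ops"}},
]})
print(list(denied_events(sample)))
```

Only the `ListAccessKeys` denial survives the filter, and that tuple is what you would wire into an alert, an email, a CloudWatch alarm, whatever fits your environment.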
As you can see here, I'm not authorized to do anything, but as Moshe mentioned, we're gonna have an alert to present. The thing is, as I mentioned at the start, it takes time to synchronize and get the alert, so we have two options: either you wait 15 minutes, or I have recorded it, so probably we'll go with option two. So with option two, basically, we'll go to the alert. Just to emphasize: it takes Amazon 15 minutes from the moment you do something until it's populated into the logs, okay? So we don't want you to wait the 15 minutes, so we got a recording. So just a short demo: basically, when we go to CloudWatch, we'll see our alert, which is the same as what I did now. I just tried to access the keys, and eventually I should see the alarm here. At the same moment, Moshe can choose whether he wants to get an email or some other way to get the alert. Yeah. Well. Thank you guys, it's a methodical break. We have a little tradition here. You know what it is. How do I get shout-outs from the crowd? I can't do it anymore. Drink! Welcome to DEF CON. It's good that I'm already past the demos. Okay, so let's go and just continue. It's a small, small shot. You didn't see me last night. So anyway, we also decided to validate exactly what happens with machines that are not managed by Cloudefigo, because basically you can launch instances, but they won't be controlled. So in this scenario, we basically took the list from Chef and the list that we have on Amazon, compared them, and provided an output of what is not managed. So if we look here, where's the browser? I'm fine, believe me. So basically we have another API call in Cloudefigo. We have one server; we can translate it to the name, but basically this is our Nessus server, which is not managed. I just wanted to show an output of something that we have in the list. Your turn. Yeah, okay. I think we are wrapping things up now. The next phase, yeah, cool.
The next phase will basically be termination, right? A server launches, works, processes; the last thing would be termination. Basically you need to terminate the server along with the attached volume, unless you do some kind of backup and move it to different servers. You need to kill the IAM role, right? We built a role specific to servers. And the most important thing: some of the attacks on the cloud target data that is thought to be deleted but stays somewhere, right? Cloud providers don't like to really delete stuff, right? They put it on a shelf somewhere, waiting for you to say, okay, can you restore that for me? And they say, yeah, we can do that for a nice amount of money, right? But the problem is your data is basically being replicated to all of those places. So what can you do? How can you make sure it's really deleted? There are a couple of ways to do it. What we did here is what is called crypto shredding. The idea is that the data is encrypted, so we only need to destroy the key. Once you destroy the key, the data is useless. Again, it depends on your scenario and where you keep the keys. Over here we deleted it from S3. You could say that S3 is backed up too, and you would be correct. But again, according to your threat model, if you keep the key in a physical location, you can also physically destroy it, and then your data is basically protected. It could still be kept somewhere, but it's useless. So don't forget about crypto shredding, the shredding of the keys, in order to make sure that your data is safe.

So this was the last phase, and we're pretty much wrapping things up. What we want you to take away from this: new software development methodologies and new infrastructure services are changing the way that we treat applications. On-premise, a production server was like the holy grail: you don't touch it, right? Periodic maintenance once every six months, pizza nights; people treat it like a holy thing.
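Before moving on, the crypto-shredding idea above can be shown in a few lines. This is a toy illustration only, not production cryptography (a real system would use AES through a vetted library and a proper key store, as the talk implies): the point is simply that once the key is destroyed, every surviving copy of the ciphertext, on any shelf or backup, is unrecoverable.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; stands in for real encryption in this sketch."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"customer database dump"
key = secrets.token_bytes(len(plaintext))    # random key sized to the data

# The ciphertext is what gets stored, replicated, and backed up by the provider.
ciphertext = xor_bytes(plaintext, key)

# While the key exists, the data is recoverable.
assert xor_bytes(ciphertext, key) == plaintext

# Crypto shredding: destroy the key. Any remaining copy of the ciphertext,
# anywhere, is now indistinguishable from random bytes.
key = None
```

The security of the scheme now rests entirely on where the key lived and how thoroughly it was destroyed, which is exactly the threat-model question the speakers raise.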
You don't really want to mess with it. In the cloud, we can see an entire production environment change five times a day: deleted, launched again, right? This is DevOps, this is continuous integration. It's changing, and security needs to change with it. So you need to learn how to automate your security. If you don't automate your security, you will basically be left out, and they will call you once in a while just to give some kind of opinion, but they will not build security into the production servers; you'll stay in the IT department, right? So we need new thinking about how to automate security. This is the new challenge for software development companies. So hopefully we have demonstrated enough about how to do automation and what the different phases are. You can take it into different areas. You can use Cloudefigo or build your own; you have the right steps to do it. And I think we have a couple of minutes for questions, Kevin. Two minutes for questions. If you have any, we'll be happy to take them; if not, come and talk to us later. We'll be around. You've got our Twitter handles and other ways to reach us. And before the questions: you can follow us, we're going to post the updated link about Cloudefigo, and you can also get into the website. We're looking for contributors. We need to improve our documentation and our features. So you're welcome to join. Thank you.

Thanks for the talk. One of the things you talk about is instances that appear that you aren't expecting. What about instances that don't die when they should have died? Instances that died that shouldn't have died? Instances that didn't die when they should have died. In other words, you're looking at all these instances.
You've got lots of instances, lots of roles, and you're expecting, say, these particular servers to only be around for 24 hours, but this particular instance has been around for eight months and you just aren't aware of it. Yeah, I agree. We thought about handling this; it was one of the phases that we considered doing. But then we took a look at Janitor Monkey from Netflix, part of their Simian Army, and it's a pretty awesome tool. What it does is review the configuration and terminate all the unnecessary instances, roles, and all of that garbage that is left behind. So basically we said, okay, there are good enough tools out there, so we won't go into that. But I agree that this is definitely a challenge and needs to be addressed, because there's a lot of junk piling up. Well, because, I mean, as an attacker, using this tool, if I can just jam your shutdown procedure, that's almost good enough. Yeah, I agree. It's a problem, but we're not solving all the world's problems here. Thanks a lot for the comments. Any other questions, guys? No more questions, Kevin? Unfortunately, no. Sorry, come over and talk to us. Okay. Thanks a lot again.