Alright, so hi. I'm Chef. Good morning. I'm the guest emcee for this one because your next speaker, Matt Nash, is a friend and colleague of mine. He and I worked together for NCC Group in Texas. So I've worked with Matt for the last four years, and when I heard about this research he was working on, I encouraged him to submit, because I think this is a wonderful topic for the Packet Hacking Village. And so Matt, I'll let you talk about yourself a little bit more, but welcome to the Wall of Sheep and break a leg. Thanks, Chef. Alright, man. So as Chef said, I am Matt Nash. I am a security consultant with NCC Group, and this is my talk, Head in the Clouds. So what are we going to be doing today? We're going to be looking at the configuration of accounts within Amazon's AWS and then also Microsoft's Azure. Some of the resources that were put into this talk were native tooling developed by Amazon and Microsoft, security-focused tooling that was developed by NCC Group, and a third-party best practice reference. So configuration reviews typically fall into two different categories: we're looking at the services, or the hosts or VMs themselves. Today we're not going to be looking at the hosts. We're going to be focusing specifically on the services. So the first thing we're going to look at is AWS, which is Amazon Web Services. It is on-demand infrastructure: computing resources, storage resources, analytics, application services. They are available and scalable. And most people think of AWS as infrastructure as a service, which contrasts with Microsoft's Azure, which we'll talk about a little bit later. So the tooling that we're going to use here is the AWS CLI. It's developed by Amazon for querying AWS and interacting with the API. Another thing we're going to look at is Scout Suite, which is a tool that we developed for doing automated collection of configuration data from AWS. And then the CIS Benchmarks, which are a third-party best practice reference.
So the first thing that we want to do when we're going to assess an AWS environment is we want to set up our auditing account. We want to apply a couple of managed policies to it. The first one is ReadOnlyAccess, and the second one is SecurityAudit. We also want to establish programmatic access for the API, which is an access key and secret key pair. And then, not absolutely necessary, but it's nice to have access to the web console so that you can actually go through and click in the UI and look at that representation of the configuration. So with the access and secret key, we need to put those into something that we can feed into the AWS CLI. There are a few other methods here that we're not going to cover today. The one that we're going to look at is the credentials file, and that credentials file looks something like this. If you run aws configure without a profile argument, you will get a default entry, which is the top one you see there. And if you run it with a profile name, in this case I used profile1, you'll get that second entry there. So the first thing that we want to do when assessing an AWS account is we'll dump out the account number, so we know which one we're looking at, and we can also apply this to any findings that we come up with. So we'll run this command and we get the account number. We'll use that for our reporting so that we can identify what account the findings go to. And so the first service we're going to look at is IAM. IAM is the Identity and Access Management service. This deals with authentication, or how your users sign in, and authorization, which is permissions against resources or anything else in the account. So in IAM, the first thing we want to look at is the list of users on the account. So we'll run this command and we get some output. You can see here I've truncated it. You'll see an ellipsis on several slides just because we're dealing with a lot of data. So we see a user here, A Walters.
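The slides with the actual commands aren't reproduced in the transcript, so here is a sketch of the setup steps just described, using standard AWS CLI syntax; the profile name and key values are placeholders, and running these requires an account with the auditing policies attached.

```shell
# Store the auditing user's access/secret key pair (writes ~/.aws/credentials).
aws configure --profile profile1

# The resulting credentials file looks roughly like this:
#   [default]
#   aws_access_key_id = AKIAIOSFODNN7EXAMPLE
#   aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#   [profile1]
#   aws_access_key_id = ...
#   aws_secret_access_key = ...

# Dump the account number so findings can be tied to the right account.
aws --profile profile1 sts get-caller-identity --query Account --output text

# Then list the IAM users on the account.
aws --profile profile1 iam list-users
```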
We'll go ahead and take a look at that account. It was created on May 16th of 2015, so it's probably an older account. Probably good to look at. So the first thing we want to do on a user is get a list of the access keys. So this is what gives them programmatic access to the API. So we'll run this command, specifying the username. We get some output here. We've got the access key ID for that user. And we see that that access key was created May 27th of 2015. So not long after the account was created; probably an old key. And so we'll grab this access key ID and we'll use it in some future queries here. So the last time that this access key was used: when did the user actually use the API? So we'll take that access key from before, plug it into this command, and get some output. We see the last time that they used it was June 16th of 2015. So this account hasn't been used in over four years. Probably a good thing to audit. So in the previous command, we looked at some user data, on their access key specifically. Another thing you can do is you can generate a credential report, which gives you multiple access keys associated with the account and anything else associated with that user, like the last time they logged in, things like that. And you can generate these every four hours. Amazon doesn't want you just hammering it with requests, so they put a time limit on there. So we'll run this command. We get the output of STARTED because we haven't actually run the generate-credential-report command on this account before. And the next thing you want to do is actually collect the report. So we'll run this next command here, and you'll get a very large Base64-encoded string. I've truncated it here. What we need to do to be able to read this credential report is decode it. So I've got a little cut command that I'm piping this into that's looking for the columns associated with the rotation dates of the keys. So we'll run this command and get some output.
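A sketch of the access-key and credential-report queries from this section, along with the root-usage and password-policy checks discussed next. The username, key ID, and cut column numbers are assumptions; the columns are based on the documented credential report CSV layout, where field 1 is the user, field 5 is password_last_used, and fields 10 and 15 are the two access keys' last-rotated dates.

```shell
# Access keys for a user, then the last time a given key was used.
aws --profile profile1 iam list-access-keys --user-name awalters
aws --profile profile1 iam get-access-key-last-used \
    --access-key-id AKIAIOSFODNN7EXAMPLE

# Kick off the credential report (rate-limited; returns "STARTED" the first
# time), then fetch it, decode the Base64 CSV, and pull the rotation columns.
aws --profile profile1 iam generate-credential-report
aws --profile profile1 iam get-credential-report --query Content --output text \
    | base64 -d | cut -d, -f1,10,15

# Root account usage: the report's <root_account> row includes
# password_last_used (field 5).
aws --profile profile1 iam get-credential-report --query Content --output text \
    | base64 -d | grep '^<root_account>' | cut -d, -f1,5

# Account-wide password policy (minimum length, reuse prevention, expiry).
aws --profile profile1 iam get-account-password-policy
```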
We see that A Walters' key was last rotated May 27th. So this is definitely an old access key. We need to go and audit and see if maybe that user still works with the company and maybe revoke their access. The next thing that we want to look at is root account usage. So the root account in AWS is all-powerful. We don't like to see that account used habitually for administrative functions. We like to see an IAM role created for any of the permissions that you want an admin user to perform. So this is a very similar command. It's looking for the last login date of the root account. So we run this command. We can see the last time that this root account was used was July 2nd of this year. So probably habitual. The recommendation would be to lock away that root account, change any of its access keys, and just not use it if at all possible. The next thing that we want to look at is the account password policy. So this affects all users on the account. So we'll run this command. We get the password policy here. You can see that the minimum password length is eight characters. It's pretty short. But what is also problematic is a user can reuse a password as long as it's not one of the last three used. So a user figures out, hey, if I change my password three times, I can reuse my old password. You can see how that might be a problem. And what could be problematic with this configuration, depending on how they have the environment set up: passwords are set to not expire. That may be intentional. It may not be. You would want to go back to whoever runs this account and make sure that that's what they want. And also, users are not able to change their password. That may also be a design decision. So we want to verify that. So some other common issues that we see with IAM: policies that allow iam:PassRole and sts:AssumeRole with a wildcard. So this could be for wildcard resources. This could be for wildcard principals or users.
If you had a wildcard for principals, that means anybody can do whatever this specific action is. And if you had wildcard resources, they could perform it against any resources. Another thing that we see is users without multi-factor authentication. That's a second factor. So if their access key and secret key were compromised, or if their password were compromised, you would have a second factor there preventing an attacker from compromising the account further. Policies that allow NotAction: so these are effectively blacklists. We like to see actions that are specified, because that follows the principle of least privilege. NotAction essentially says you can perform everything except this action. And also cross-account AssumeRole policies that lack an external ID or MFA. So the external ID can either be an account number for the account that you want to access your account's resources, or it can just be a random string that you come up with that's unique to that third-party account. And then MFA, we've already discussed why that's a good thing. If the third-party account is compromised, this adds an additional step to keep an attacker from compromising your account's resources. So the next service we want to look at is EC2. This is the Elastic Compute Cloud. It deals with scalable computing resources, which includes instances or VMs, network interfaces on the account, network access control, security groups, and any load balancing that's in place. So the first thing that we want to do in EC2 is get a list of the VPCs, or virtual private clouds. So we'll run this command. We get a list of VPCs, again with truncated output. We see VPC IDs. We've got CIDR blocks that are associated with the VPCs. So we'll take this VPC ID and we'll use it in some upcoming queries. Do the VPCs that we've found have Flow Logs enabled? So Flow Logs allow you to gain insight into the network actions that are going on relative to the VPCs.
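A sketch of the VPC and Flow Log queries just described; the commands are reconstructed from standard AWS CLI syntax, since the slides aren't in the transcript, and the VPC ID is a placeholder.

```shell
# List VPCs with their IDs and associated CIDR blocks.
aws --profile profile1 ec2 describe-vpcs

# Check whether a given VPC has Flow Logs enabled; an empty FlowLogs
# list in the response means they are not.
aws --profile profile1 ec2 describe-flow-logs \
    --filter Name=resource-id,Values=vpc-0123456789abcdef0
```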
So we'll take that VPC ID from before, plug it into this command, and we get some output here on the Flow Logs. And you can see that a Flow Log status of ACTIVE shows up in this output. If Flow Logs were not enabled on the VPC that you're querying against, you would just get an empty set, like here. So the next thing we want to do is get a list of network interfaces. So we'll run this command, which is the describe-network-interfaces command. We get some output. This is actually going to be one of those queries that generates a lot. So you'll probably have to scroll through to find the information that you want, or grep it. So we see a network interface ID here. If we scroll down through the output, we can look for interfaces that are exposed to the internet. And in this case, we do see a public IP, and we also see a public DNS name. So this interface is exposed to the internet. Another thing we want to do is get a list of subnets on the account. So we run this command. We get CIDR blocks that are associated with each subnet, and also the subnet ID and the VPC that it is associated with. We also get the availability zone and the ID of that zone. So we'll take this subnet ID and we'll use it in some future queries. So what NACLs are associated with this subnet? We'll run this command. We'll get a list of NACLs. And we can see this subnet ID has a rule action of allow with egress false. And in this case, egress false means ingress. And we can see it allows inbound connections from the CIDR block 0.0.0.0/0, or everyone. So the next thing we want to do is get a list of security groups. So NACLs and security groups are similar, but they are different: security groups are in front of instances, whereas NACLs are in front of subnets. So we'll run this command. We get a list of security groups on the account. And so something we look for is the default and launch-wizard-1 security groups. And in this case, we see that both of those are in this listing of security groups.
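The interface, subnet, NACL, and security group enumeration above, plus the SSH-exposure and instance queries that come next, might look like this with the AWS CLI; the subnet and security group IDs are placeholders.

```shell
# Network interfaces; look for entries with a public IP / public DNS name.
aws --profile profile1 ec2 describe-network-interfaces

# Subnets, then the NACLs associated with one subnet.
aws --profile profile1 ec2 describe-subnets
aws --profile profile1 ec2 describe-network-acls \
    --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0

# All security groups; watch for "default" and "launch-wizard-1" entries.
aws --profile profile1 ec2 describe-security-groups

# Groups that allow port 22 from 0.0.0.0/0, printing name and ID.
aws --profile profile1 ec2 describe-security-groups \
    --filters Name=ip-permission.from-port,Values=22 \
              Name=ip-permission.to-port,Values=22 \
              Name=ip-permission.cidr,Values=0.0.0.0/0 \
    --query 'SecurityGroups[*].[GroupName,GroupId]'

# Instances sitting behind a given security group.
aws --profile profile1 ec2 describe-instances \
    --filters Name=instance.group-id,Values=sg-0123456789abcdef0
```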
We'll probably focus a lot on this launch-wizard-1 for the next queries. We'll grab the SG ID for that security group, and we'll use it in our next queries. So something we want to look at is SSH exposed to the internet. So this is kind of a lengthy query, but essentially what it does is it looks for a port range from port 22 to port 22 for the CIDR block 0.0.0.0/0, or everyone, and it will output the name and ID of any security groups it finds. So we do see this launch-wizard-1 again. We also see an OpenSSH group. OpenSSH, that's probably okay, but we definitely want to go back and verify, make sure it's not going to be a group that should be for only developers or a specific group, since it is open to the internet. They probably want to lock that down if it's supposed to be for a smaller group. So let's focus on this launch-wizard-1. What instances are associated with that security group? What instances or VMs are behind it? So we'll run this command, plugging in the SG value that we grabbed, and we'll get a list of instances that are associated with it. So we see here there are actually two instances. If you remember from the NACLs, the NACL that we saw before opened up SSH to the world, which is associated with this security group. So if there are any SSH services, or services running on port 22, on these instances, that's going to be exposed to the internet. So that's our external attack surface. Some other common issues that we see with EC2 are security groups that whitelist AWS CIDR ranges. So these are network blocks within AWS. We like to see specific IPs for any instances that you want to be able to access resources, rather than an entire net block. If somebody else gets an instance within that block, then that would allow them access to whatever these resources are. Another thing we see is EBS volumes that don't have encryption enabled. So if the storage mechanism for the EBS volume becomes compromised, if it's not encrypted, the data is also compromised.
If it's encrypted, it should be safe. So another thing we see is all ports that are open to all. This does not follow the principle of least privilege. We like to see specific ports open for services that you intend to have exposed. And then also we see secrets in the instance metadata user data. So if an instance is set up using a script, it essentially gets recorded, like bash history. So if there are hard-coded passwords in that script, or anything that points to sensitive data, that would be exposed if somebody goes to the user data on the instance. So the next service we want to look at is the S3 service. That is the Simple Storage Service, and it deals with object storage. You may be familiar with the term buckets. So the first thing we want to do is get a listing of the buckets on the account. So we'll run this query, which will dump out a listing of the bucket names. So we see here we've got a bucket named dev-raven. Let's go ahead and take that for our next queries. So we want to look and see if the bucket is world-listable or world-writeable. So can people browse and see that bucket, or can they write data to the bucket? So we'll take the dev-raven bucket name, throw it into this command, and get some output here. We scroll through and we see the group global AllUsers; if you're looking in the web UI of AWS, you'll see this listed as Everyone. We can see that that has read and write permissions on this bucket. So it is world-listable and it is world-writeable. So the next thing we want to look at is objects within a bucket: are they world-readable and world-writeable? So from the output before, we've got a bucket named test. So we'll query that with this command. And we can see that this bucket policy for test allows a wildcard for principals, so all principals, any users, to do s3:GetObject and s3:PutObject. So it is world-readable and world-writeable for any bucket objects, or objects within this bucket.
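A sketch of the S3 queries in this section, including the versioning and logging checks that follow; the bucket names are as spoken in the talk, with hyphenation assumed.

```shell
# List buckets by name.
aws --profile profile1 s3api list-buckets --query 'Buckets[*].Name'

# Bucket ACL: a READ/WRITE grant to the global AllUsers group ("Everyone"
# in the console) makes the bucket world-listable/world-writeable.
aws --profile profile1 s3api get-bucket-acl --bucket dev-raven

# Bucket policy: look for a wildcard Principal with s3:GetObject/s3:PutObject.
aws --profile profile1 s3api get-bucket-policy --bucket test

# Versioning ("Status": "Enabled") and access logging (shows the target
# bucket and prefix); both return no data when disabled.
aws --profile profile1 s3api get-bucket-versioning --bucket dev-raven
aws --profile profile1 s3api get-bucket-logging --bucket dev-raven
```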
Next thing we want to look at is versioning for the buckets. Versioning allows you to keep a record of changes that occur on a bucket. So if somebody deletes data, you should be able to recover it if you have versioning enabled. So we'll run this command against dev-raven, and we see a status of Enabled. So versioning is enabled for this bucket. If you did not have versioning enabled, you would just get no data back from the query. So the next thing we want to look at is access logging for the buckets: seeing what type of access requests are coming in for the buckets and for the data. So we'll again query dev-raven using this command. And we can see here that not only is logging enabled in the output, but we also get the bucket that the logs are going to and a directory in that bucket. So logging is enabled. If you did not have logging enabled, you would just get no data back from the query. So some other common issues that we see with S3: versioned buckets that don't have MFA Delete. So this is a second step before buckets or data can be deleted. Another thing we see is buckets that don't have default encryption enabled. This is very similar to the EBS issue that we spoke about before. Data at rest, if it's encrypted, should be safe if the storage mechanism is compromised. Another thing we see is buckets that allow cleartext, or HTTP, communication. So one of the issues that you could see here is if this bucket is serving software updates to clients, anybody with a suitable position between that user and the bucket could intercept the communication. They could modify the packages as they're coming through, and then your user may actually be infected with malware. So we like to see secure communication enforced for buckets. All right, so now we're going to look at Microsoft Azure. Azure is software as a service, platform as a service, and infrastructure as a service. It's also available and scalable, very similar to AWS.
However, people see it more as a platform as a service. So they kind of handle the OS updates and other things that you would be responsible for within AWS. They kind of manage the behind-the-scenes stuff. So some of the tools that we're going to use for querying Azure are the Azure CLI, which is developed by Microsoft for doing very similar functions to what we saw in AWS; Scout Suite, which is developed by NCC Group; and also the CIS Benchmarks. So account setup within Azure is a little bit different from AWS. They use role-based access control. We like to see at least read access on any of the resources that you intend on auditing, and Security Reader is a good role to use. So credentials they handle quite differently from AWS. For the login flow, you use the az login command. It will then throw up a prompt in your browser, your default browser. You will log in using your Microsoft ID, and then a token will get passed to the Azure CLI client that will then get used for any of your queries. So we'll log in using that az login command. And there's also a legacy option to use a device code rather than logging in through the web UI. You'll get a bunch of output here. Azure is very noisy with its output. You'll get the cloud name that you've just logged into, as well as the Microsoft ID that you've used and the user type that you have. And we can see here our audit account, pentester@sharklasercorp3742.onmicrosoft.com. So similar to AWS, the first thing we want to do is grab the account ID, make sure that we're hitting the right account, and then we can use that for any of our reporting. So we'll run this command and we get the account ID. So we'll use that for any of our findings. The first service that we're going to look at is Azure AD, or Active Directory. Azure AD is quite a bit different from on-prem AD. They handle a lot of things very differently. So if you're familiar with on-prem AD, this is a different beast.
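A sketch of the Azure login and account-identification steps described above, again using standard Azure CLI syntax since the slides aren't reproduced here.

```shell
# Browser-based login; a token is handed back to the CLI for later queries.
az login

# Legacy device-code flow, if the browser flow isn't an option.
az login --use-device-code

# Grab the subscription (account) ID for reporting.
az account show --query id --output tsv
```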
So the first thing we want to do is get a list of the users in Active Directory. So we'll run this command, which will get the user names and then their job titles. So we see here we've got a system administrator three, probably a good account to audit, and their ID is A Adams at the same company. So the first thing we want to do, once we've got a username that we want to audit, is get some additional information about the user. So we'll run this command, specifying the user ID, and we get a bunch more output. And we can see that the user's name is Arthur Adams. Arthur works in the Information Technology department, and Arthur's account was created in March of this year. So a pretty new user. The next thing we want to do is see what groups this user is associated with. So we'll run this command, again specifying the user ID. And we can see that Arthur is part of Infotech and SysAdmins. Not surprising; Arthur is a technology worker. And we've got the object IDs also for those groups. Another thing we want to look for is guest users on the account. So these are not managed by your organization. They have been added with permissions to be able to do things on your account. So we run this command. We see Tammy Rogers 256BQ8 with a tag of EXT, so it shows you that an external user has been added to the account. Like I kind of alluded to before, these are users that you don't manage. So any problems with their account, if they have a weak password, it makes it much harder for you to make sure that they are secure. So we like to see specific AD accounts created for anybody who needs access to your environment. If you didn't have any guest users, you would get an empty set. So the next service we want to look at in Azure is the Key Vault. So we will run this command. We'll get a list of the Key Vaults that are on the account. So we scroll through. We see we've got this vault name. You also get the location of it and the ID within Azure. So we'll take that ID.
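The Azure AD and Key Vault enumeration above might look like this; the user principal name is an assumed form of the account mentioned in the talk, not shown verbatim on the slides.

```shell
# List users with display names and job titles.
az ad user list --query '[].[displayName,jobTitle,userPrincipalName]'

# Detail on one user, and the groups they belong to.
az ad user show --id a.adams@sharklasercorp3742.onmicrosoft.com
az ad user get-member-groups --id a.adams@sharklasercorp3742.onmicrosoft.com

# Guest (external) users; an empty list means there are none.
az ad user list --filter "userType eq 'Guest'"

# List the Key Vaults on the account.
az keyvault list
```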
And we'll look and see if this vault is recoverable. So if the vault is deleted, can we revert that? And we're going to look for enablePurgeProtection or enableSoftDelete in the output. And so we'll specify that ID from before. And we see we don't actually get data back. So this vault is not recoverable. If it's deleted, it's gone. So the next thing we want to look at is the keys within the Key Vault. Do they have an expiration date set? So we'll run this command. It's looking for the attributes enabled and expires. So we see a key here. We notice that it expires January 1st of 2020. So that's good. It does have an expiration date. The next thing we want to look at is the secrets in the vault. So do those also have an expiration date? We get similar output here. We see that this secret also expires January 1st, 2020. So the next service that we're going to look at is logging and monitoring. These commands are going to be much longer. You'll see a slide with just the command on it, and then the next slide will actually have the output, just due to how we have to query this service. So the first thing that we're going to look at is activity log alerts for creating or updating network security groups. And we are going to look for the operation name Microsoft.Network/networkSecurityGroups/write. And we get some output here. We see that it's set to global. The scope is properly set for the subscription. And enabled is true. So activity log alerts are enabled for creating and updating network security groups. So let's look and see if activity log alerts are also enabled for deleting network security groups. Very similar command, but the operation is going to be Microsoft.Network/networkSecurityGroups/delete. So we get some output here. We see this is also set to global. The subscription is properly set and enabled is true. So log alerts are enabled for deleting network security groups.
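A sketch of the Key Vault and activity-log-alert checks in this section; the vault name is a placeholder, and the alert query shown is one way to surface the operation names being discussed.

```shell
# Recoverability: look for enableSoftDelete / enablePurgeProtection.
az keyvault show --name raven-vault

# Do keys and secrets have expiration dates set?
az keyvault key list --vault-name raven-vault \
    --query '[].{enabled:attributes.enabled, expires:attributes.expires}'
az keyvault secret list --vault-name raven-vault \
    --query '[].{enabled:attributes.enabled, expires:attributes.expires}'

# Activity log alerts; inspect the conditions for operation names such as
# Microsoft.Network/networkSecurityGroups/write or /delete.
az monitor activity-log alert list \
    --query '[].{name:name, enabled:enabled, condition:condition}'
```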
The next thing we want to look at is creating and updating network security rules. So these are the rules associated with those groups. Very similar command, but we're looking at the Microsoft.Network/networkSecurityGroups/securityRules/write action. We get some output here. We see that it's set to global, the subscription is properly set on the account, and enabled is true. Same thing for delete: run a very similar command, but with the securityRules/delete action. We get some output here. We see that the location is also set to global, the subscription is properly set on the account, and enabled is true. So this next query is going to look for any actions on SQL Server firewall rules. Similar command, but we're looking at Microsoft.Sql/servers/firewallRules/write. So we get more output here. The location is set to global, the subscription is properly set, and enabled is true. So alert logs are also set on this action. So for updating a security policy, a very similar command, but we're looking at the Microsoft.Security/policies/write action. We get some output here. We see global, the subscription is also set properly on the account, and enabled is true. So we have alerts there as well. The next service we want to look at is networking. The first thing we want to look for is RDP: is that exposed to the internet? So we'll run this command that will get a list of network security groups and then print out the name as well as the rules associated with each. So we get a bunch of output here. We look through and we see that there is an allow rule for inbound connections to port 3389, and the source address is a wildcard. So any source address can connect to the service. So it is open to the internet. The next thing we want to look at is SSH: is SSH also open to the internet? Same command, same output. We scroll through, and we see that access is allowed for inbound connections on port 22 for any source address, again. So SSH is also open to the internet. So the next thing we want to look at is network watchers.
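The NSG exposure checks above, along with the network watcher and flow-log retention queries discussed next, can be sketched as follows; the resource group and NSG names are placeholders.

```shell
# Dump NSGs with their rules; look for Allow/Inbound rules on 3389 (RDP)
# or 22 (SSH) with a wildcard source address.
az network nsg list --query '[].[name,securityRules]'

# Network watchers: look for a provisioningState of "Succeeded".
az network watcher list

# Flow-log retention for a given NSG; at least 90 days is recommended.
az network watcher flow-log show --resource-group raven-rg --nsg raven-nsg
```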
These enable flow logging, or insight into network actions that are occurring. So this command will output a list of the network watchers on the account. We're looking for a provisioning state of Succeeded. And if you didn't have any network watchers enabled, you would have an empty set, like this. So, flow log retention period: is it sufficient for any of the flow logs? So we'll run this command, specifying a resource group and a network security group, and we're going to look for the retention policy attribute. In this case, we've got one that has days set to 30. We like to see this at least 90 days. So, NCC Group Scout Suite. This is a tool we spoke about in the beginning of the presentation. Scout Suite is an automated tool that does a lot of this stuff in mere minutes. So a lot of those commands were pretty long, and I hope they didn't make your eyes bleed. This should be a lot better. Multi-cloud: right now it supports Amazon's AWS, Microsoft Azure, and Google Cloud Platform, and we recently released updates that include Alibaba Cloud and Oracle Cloud Infrastructure. So, using Scout Suite against AWS. When you run Scout Suite against an AWS account, you will get a very similar interface to this. You'll get the account number. You'll see the services that Scout Suite looked at. And we've got some dropdown menus up top that allow you to get to different dashboards. Here we're looking at the IAM dashboard. So you see all of the findings that were associated with IAM. And here we've got a finding for password expiration disabled. So we'll click on this. We get a nice output of that password policy. And we can see that Scout Suite has identified that password expiration is false, and it's highlighted that. So you can very quickly go through and see what things are enabled or not enabled on your account. We can go to the S3 dashboard, and we see some more findings here for S3. And we see a finding for GET actions authorized to all principals.
So this is readable by everyone. So we click on that. We actually get the listing of the bucket and then the configuration data. And you can actually see, down in the bottom right corner, there's a little details button. So you can get the details of the policy itself for that bucket. All right, running Scout Suite against Azure. Very familiar webpage. Here we have fewer services, because Azure's API is not as mature as AWS's, but they are adding features. So here we've got five different services: Key Vault, Network, Security Center, SQL Database, and Storage Accounts. So we'll go up here to the Storage Accounts dashboard. And we see a finding for access keys not rotated. So if we click the little plus sign over there, it drops down and you get some details on why Scout Suite identified that. So the keys were rotated greater than 90 days from today. And there's also a reference to the CIS Benchmarks. So, CIS Benchmarks. What are these? They are developed by the Center for Internet Security. They are best practice guides that provide step-by-step audit queries, and they also provide step-by-step fixes once you've identified some issues. And they're available for almost any technology you can think of: Red Hat, Windows Server 2016, any kind of technology, they probably have a benchmark for it. And the ones that we used for this presentation were the Amazon Web Services Foundations and the Microsoft Azure Foundations. These are the cover pages for the PDFs. And they provide a table of contents that you can navigate through; you can click on whatever findings you're wanting to look at. And they provide a step-by-step. On the left here you see the audit using the AWS CLI, but they also provide steps if you're going through the UI or the API. And then they also provide some references as to why those findings were identified. And on the right here you see a remediation for going through the Azure web console, and also some references. And that's it.