All right, last talk of the day. Not the last of the activities today, but we are here in room two for Threat Hunting in the Cloud. So if this is not where you're expecting to be, there's still time to move to room one. We're here today with Jacob Grant and Curtis Armour from eSentire, and I'll let them present themselves. You have the floor.

All right, thanks everybody for coming. The last talk of the day — it's been a long day, but lots of good talks, and I hope this is one of them. So as she mentioned, I'm Jacob Grant, and this is Curtis Armour. We're both security strategists at eSentire. We get to do a lot of fun stuff testing new signal sources that can come into our security operations center, and today we're talking about threat hunting in the cloud.

Some of the things we're gonna cover: first and foremost, some definitions — what's a cloud service provider — important things to get out of the way before we get into any more advanced concepts. Then, how do you take data from a cloud service provider and ingest it into a system that you can do threat hunting on? What are some new areas you're gonna be looking at compared to traditional enterprise infrastructure, and which security tools that you may have had in an enterprise environment don't port well to the cloud? And we'll look at some common attack techniques and real-life examples as well.

All right, so starting off, we're defining a cloud service provider as a company that provides something as a service. In the most traditional sense, it's always been infrastructure as a service: you think of a cloud provider, you think of VMs hosted on someone else's hardware. But that's becoming more advanced as you look at the big three — AWS, Azure and Google Cloud — where they have software as a service, functions, all sorts of different advanced microservices. And of course, it's important to recognize the reasons for moving to the cloud. The biggest one is probably just the ease of use and the speed and stability of your deployments; it's really nice to be able to spin up what would be a full data center in a couple of seconds. But there's the business side of it too: what would have been a lot of capital expenditure for on-prem servers or space in a data center becomes OPEX instead. So there's a good business case for it.

What's on the screen here is the AWS shared responsibility model. Some of you who are more familiar with AWS will probably have seen something like this before. The point of it is to make clear how the security responsibilities are split between the cloud service provider and the actual end customer. They have a few different versions of this as well; this particular one is for infrastructure as a service, so again, traditional instances and VMs. A good rule of thumb is on the left there: if it's in the cloud, it's probably your responsibility; if it is the cloud, it's probably the cloud service provider's responsibility.

What I have here is a diagram that shows cloud service provider adoption. I called it the big three earlier with AWS, Azure and GCP, but that's actually not the case right now. This is from a survey of about 400,000 infosec people from LinkedIn, and it says Rackspace is actually third right now, though GCP is going to outstrip them fairly soon. For the purposes of this talk, we're trying to stay within the realm of AWS and Azure.
We may change this talk later on to include GCP, but we'll try and stay within that realm for now. The important thing to know as we go through this is that a lot of these concepts apply to other cloud service providers as well.

So this is from the same survey. They asked all of the infosec professionals what their level of security concern is related to moving to a cloud service provider. Shockingly, 9% of people were either not concerned or only slightly concerned. The number of people who were concerned at some higher level, moderately or extremely, has gone up since the last time they ran this survey — I think about 11% since then. Also important and relevant to this talk: how well do the security tools you'd normally be using in an on-prem environment translate to a cloud service provider? As you can see, it's usually not good. There are a few tools which do port over well, but the vast majority at least lose some of their functionality or just flat out don't work.

Awesome — and then, you go ahead. Mic's live? Perfect. All right, so let's take some time and look at some examples of breaches that we've seen related to cloud service providers. In most cases, a lot of the attacks we see are related to people sharing API keys in GitHub repositories, or posting something they shouldn't be posting that someone gets a hold of. So, as we heard in the last talk: someone gets access to some sort of privilege, they have access to spin up resources, they spin up resources and they monetize it with crypto miners — cryptojacking is what it's called. And we have some cases that aren't explicitly cloud-related, like Kaseya, for example. It works in a hypervisor-type model where it has a web interface; there was a remote code execution that was not disclosed, someone got access to the management interface, and they pushed crypto miners down to all the guest machines. So this concept can be applied to cloud, and it's something that we at eSentire detected because we had endpoint visibility, which is very key, and we'll go through those examples.

So what are they after? As we said, most of the time getting API keys gives them a lot of power and a lot of flexibility within the environment. Any access they can get to the portal gives them inherent access to machines, depending on the permission level of that user. So they're after credentials and console access, and they wanna escalate privileges — sometimes when you get keys, they're not the right level, but you can use them to escalate to higher-level keys to be able to do whatever you wanna do within the environment. There's also direct access to instances. We're gonna cover the fact that in traditional hybrid environments, we see some clients forklifting old servers into the cloud; if you can get direct access to that instance, you're able to harvest whatever data is on the instance itself. Being able to run code or access private data is key for the bad guys, so they're trying to get direct access to instances. And then obviously they're trying to get data out, to extort someone or sell it to the highest bidder.
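As a concrete illustration of why leaked keys matter so much: the first thing an attacker (or a pen tester) typically does with a found key pair is ask the API who the key belongs to and what it can see. A minimal sketch with boto3 — the credentials here are obviously placeholders — and note that every one of these calls lands in CloudTrail, which is exactly why collecting those logs matters:

```python
import boto3

# Hypothetical leaked credentials scraped from a public repo (placeholders).
session = boto3.Session(
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)

# Who does this key belong to? Logged in CloudTrail as sts:GetCallerIdentity.
identity = session.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])

# What can we see? A single s3:ListBuckets call enumerates every bucket name.
for bucket in session.client("s3").list_buckets()["Buckets"]:
    print(bucket["Name"])
```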
So, some examples that I'm sure we're all familiar with here. Uber: they posted some code which allowed the attackers to gain access to a portion of their infrastructure. It was S3 buckets that they stole data from, and then they took that data and extorted Uber, and Uber paid the ransom as a bug bounty. It all became public, and then everyone was waving their finger at Uber, of course.

Another one: Tesla. They had a Kubernetes console that was open to everyone. Someone got access to it, and it held keys to their S3 buckets — but those keys also had the access to be able to spin up servers. So they were able to spin up assets, put crypto mining agents on them, and get money out of it that way. And again, as we said before, the Kaseya breach: not specifically cloud-based, but it functions in the same sort of way. They were able to get access to the hypervisor level and push code down to the guest machines across a global client base. And obviously, you need to be able to see on the endpoint to detect those pushes, especially if you don't have visibility at the CSP log level. And this just in: Samsung posted a ton of source code publicly to the internet. A security researcher got a hold of AWS credentials which had access to their entire repository. This is very, very bad, and it happened about a week ago. It stresses the fact that access and keys that get posted in public places, or that can be scraped, are always gonna be used to leverage access and execute code within customer environments.

I'm gonna go through some of the more traditional enterprise security tools that we're all probably very used to seeing at this point. If you had just a regular enterprise environment, on-prem only — we're not talking about cloud providers at this point — you're gonna see a lot of different devices, and it might depend on the maturity of the customer and what vertical they're in. But a few examples of what you're likely to see: obviously a firewall, probably a next-gen firewall or UTM; maybe a network IDS or IPS, which might be rolled into the firewall, but not necessarily; EDR or EPP agents, which I'll talk a little more about in a second; maybe a SIEM for logging; and lots of other options too. I have a bunch of logos up here for free and open-source stuff as well as some commercial options.

On the network IDS and IPS side of things: this is probably — I shouldn't say probably, it is — the oldest of the ones I just referenced. Very obvious in its function. Observe network traffic as it goes over the wire, look for either signature-based matches of malicious activity or anything that might constitute some sort of anomaly, and then either flag it if it's an IDS or interrupt it if it's an IPS. There are a few different ways of doing that interruption: if you're inline, of course, it's easy to just cut the connection, but if you're out of band you can do things like TCP resets as well.
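For the out-of-band case, the reset trick is simple enough to sketch. A toy example with scapy — purely illustrative, with made-up addresses and sequence numbers, and a real IPS has to race the endpoints and land inside the right sequence window:

```python
from scapy.all import IP, TCP, send

# Out-of-band IDS killing a flagged TCP session: forge a RST toward the
# client, spoofed so it appears to come from the server side of the flow.
# All values here are placeholders taken from the hypothetical flagged flow.
rst = (
    IP(src="203.0.113.10", dst="198.51.100.25")          # server -> client
    / TCP(sport=443, dport=51622, flags="R", seq=102934823)
)
send(rst, verbose=False)  # in practice you would reset both directions
```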
So, endpoint protection platforms and endpoint detection and response. You see these thrown around a lot, and they're kind of converging now as well — you'll see some providers trying to merge the two functionalities together, which makes sense in a way. But if you're referring strictly to EPP, you're talking about interrupting code that's being run, immediately; if you're talking about EDR, you're really talking more about the telemetry side of things. There's some overlap between the two, but that's the fundamental difference. And of course the SIEM. These have usually been used for compliance reasons — if you need to be PCI or SOX compliant, you'd want a SIEM to make sure you have all of your data to meet those frameworks. But now we're getting to a point where these have some really good, advanced applications for threat hunting and event correlation. So this is where we're gonna play in this talk: in the SIEM. Going forward, it should probably be your one-stop shop — well, not quite a one-stop shop, but it should have a lot of your information for any incident response activity you're doing.

Taking all of those and rolling them together, I've made a very oversimplified enterprise diagram. I'm not too worried about users here, because we're talking about things that are gonna be ported to the cloud as instances — not using any of the built-in services like RDS on Amazon, just instances themselves. So if you forklifted everything from your environment, this is what we'd be looking at: web servers on the front end, databases on the back end, all of those servers with some type of endpoint agent on them. You can see in the middle on the bottom there your IDS/IPS, and your SIEM on the bottom left.

So how do we get those into the cloud? Here's the exact same architecture, within AWS. First off, what's different? The SIEM log sources are different, because we're not getting logs from network equipment controlled by the customer anymore. The router and internet gateway listed here are abstractions provided by AWS, so you're actually getting the logs from AWS itself, through CloudTrail or VPC Flow Logs, which is the other one listed there. And as I mentioned, those are no longer controlled by the customer, so that's the other difference.

What's missing is our IDS — it's gone. How do you get an IDS into this architecture when the router and internet gateway are abstractions? In the enterprise environment you'd have a SPAN or port mirror there, or maybe the IDS just sits inline, but you can't do that anymore. So what do you do? There are a few different options. You can have the IDS be an instance that acts as your internet gateway, so you're effectively routing traffic through it for analysis on its way to the internet. There are lots of options for this — you can get it as a firewall image from the AWS Marketplace; there's Palo Alto, there's Check Point, all sorts of stuff to choose from — and you get some of the same functionality from that. You could also just have a Linux instance that acts as the gateway and do a lot of your detection on that instead. The problem here comes with scale: since you've pulled the responsibility of that internet gateway down into an instance, you're now responsible for more aspects of getting the network traffic out of your network.

The next one is fairly new, which is the tap agent. There are a few different vendors doing this right now — Microsoft has something called a vTap, or if you look at, say, Gigamon, they have a solution for it as well. Basically what it's doing is taking the traffic and offloading it through other means, rather than doing a port mirror or tap, over to an IDS or IPS instance. This is pretty good, and I think it's probably gonna be the best solution long term, but right now the costs, depending on the vendor you go with, are pretty steep, and it's also not portable — if you're on AWS and you wanted to switch to, say, DigitalOcean or Rackspace, that solution may not follow you there. There are a few other options too.
The craziest one on the list is running the IDS or IPS on the actual instances. This would mean running something on your web servers and your database servers to do the analysis. It's extremely expensive; I don't recommend it. You can also backhaul traffic to somewhere there is an IDS — not really a great solution for anything public-facing, since that traffic now has to route through your on-prem and all the way back up to AWS. Or you can go without. And that's an argument I'm sure we're hearing more and more: that network-based monitoring is becoming less effective as more traffic is encrypted. Which is true, but that's not to say it's useless. There are lots of things you're gonna miss out on if you decide to go without, even on encrypted traffic — things like TLS fingerprinting, among other functions. So it's important to plan according to your needs, depending on whether you think that's a requirement for your environment. Endpoint logs and some of the other logs I'm gonna talk about in a second should cover things off pretty well.

So, talking about logs from the cloud services themselves. AWS has something called CloudTrail. You can configure it by region, and it has options for what you wanna see: only reads against your infrastructure, or writes, or both. I recommend both, because why not. You can log to S3, or to Lambda, which is their function service — so if you wanted to do something special with the log as it comes in, you can do it that way. And importantly, VPC Flow Logs, which are basically NetFlow, are logged separately; you have to turn that on through a different mechanism. Azure is fairly similar, in the sense that they have their Activity Log. It's turned on by default, and you can choose where you want it stored: a storage account, or a service called Event Hub. And in the same way, their network logs are separate, but they're done through network security groups, as opposed to VPCs.

Before I get into some of the potential attacks against the cloud, it's important to understand one aspect of it, which is IAM. This is where you're gonna be controlling a lot of your users and your permissions — who's allowed to do what within the cloud service provider. You can also assign permissions to instances themselves, if they need to access other components of the cloud service provider.

Also — this is probably more useful for folks who are watching this on a recording — some easy translation between the big three. They all have a lot of the same functions, but obviously they call them all different things, so this is just a quick reference guide; I've got a link there as well if you wanna check that out. And I have a snippet here so you can get an idea of what VPC Flow Logs look like in AWS: it's got a header there with all the different fields, and it spits it out as one line per connection.
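To make that snippet concrete, here's a minimal sketch of parsing the default (version 2) flow log format in Python. The field order is the documented default layout; the sample line is made up:

```python
# Default VPC Flow Log (version 2) field layout, one record per line.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    """Split a space-separated flow log record into named fields."""
    return dict(zip(FIELDS, line.split()))

# A made-up sample record: a rejected inbound connection attempt on port 22.
sample = ("2 123456789012 eni-0a1b2c3d 198.51.100.7 10.0.0.12 "
          "53712 22 6 1 44 1554840000 1554840060 REJECT OK")
record = parse_flow_log(sample)
if record["action"] == "REJECT":
    print(f"blocked: {record['srcaddr']} -> {record['dstaddr']}:{record['dstport']}")
```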
So, some attacker tools. We've been doing a lot of testing with AWS and Azure, and these are a couple of my favorites right now. There's a tool called Pacu, which is named after a type of piranha found in the Amazon — which makes sense. And there's MicroBurst for Azure. I'll show you some of the things you can do with these in a second, but in general, here are some of the things you're gonna wanna look for. I did a lot of this testing just in Elk — it has input plugins, so you can pull straight from Amazon S3 or Azure storage accounts and get the logs that way.

First, brute force of any of the services: S3 buckets, Azure storage accounts — literally going through with a dictionary attack to try and find the names of different resources within your environment, or of accounts themselves. If someone already has a foothold and they have a user, but maybe not a lot of permissions, they may try to brute-force their permissions, and that shows up pretty easily; I'll show you on the next slide what that looks like. They might list metadata: both Azure and AWS have metadata services which record details about instances and permissions, and other code that instances are supposed to run on boot, which is also interesting. And there are exploits both of these tools can run. Pacu, for instance, has this really fun module where you can set up a Lambda function to backdoor every new IAM user that's created, by getting their access keys and posting them to a random web server somewhere. Lots of different options — I really encourage everyone here to check them out if you're interested.

What I've got on the screen now is an example of a permission brute force using Pacu. Start on the top left — this is just a visualization you can easily recreate in Elk. It shows the specific user that's making API calls. Curtis is on there, but more importantly, PacuTest is on there on the far right: the counts go up past a hundred, and it doesn't stop throwing errors. The bottom left is all of the different calls that it made, and you can see no one call makes up the majority — it's sorted by count — which is pretty consistent with someone trying to see what permissions they have by making every call they can. And on the right there's a call — this is the actual log as it comes through, in JSON format — where they used a dry-run operation. It's a flag you can set when you're making these calls, and there's not really a lot of practical use for it in a production setting; it's really either used for testing, or for this, in my experience.

This is another fun one that really illustrates the importance of collecting these logs. This log is indicating that a machine image was taken from Amazon and shared with an external account. What that means, basically, is they're allowed to boot up that image within their own account and do whatever they want with it, and you'll be none the wiser aside from this log. You get one chance to catch this.
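Both of those patterns are easy to hunt for once CloudTrail records are flowing into something queryable. A rough sketch, assuming you already have the records as parsed JSON dicts (from the S3 input, a Lambda, wherever) and that you know your own account ID; the exact field shapes are my reading of sample CloudTrail events, so verify against your own logs:

```python
from collections import Counter

MY_ACCOUNT = "123456789012"  # placeholder for your own account ID

def hunt(records):
    """Flag permission brute-forcing and externally shared AMIs in CloudTrail."""
    dry_runs_per_user = Counter()
    for r in records:
        user = r.get("userIdentity", {}).get("arn", "unknown")

        # Dry-run calls have almost no production use; one user spraying them
        # across many APIs looks like permission enumeration.
        if r.get("errorCode") == "Client.DryRunOperation":
            dry_runs_per_user[user] += 1

        # ModifyImageAttribute adding a launch permission for another account
        # means an AMI just walked out the door.
        if r.get("eventName") == "ModifyImageAttribute":
            adds = (r.get("requestParameters", {})
                     .get("launchPermission", {})
                     .get("add", {})
                     .get("items", []))
            for item in adds:
                if item.get("userId") and item["userId"] != MY_ACCOUNT:
                    print(f"AMI shared externally by {user} -> {item['userId']}")

    for user, count in dry_runs_per_user.items():
        if count > 20:  # arbitrary threshold for the sketch
            print(f"possible permission brute force: {user} ({count} dry-run errors)")
```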
All right, Curtis is gonna talk a little bit about endpoint logs in those cloud service providers.

Yeah — so how can we use logs from the endpoint to provide better visibility? First, what does endpoint visibility mean for us? We're talking strictly about VMs here, not about microservices: the use case where you're moving server infrastructure into the cloud and it's running some sort of business function. And which organizations need visibility into their cloud endpoints? If you're running any business function out of the cloud — anything with proprietary information, something a bad guy could get and utilize — you wanna have visibility into your actual endpoints. As we'll go through, you only get a certain level of visibility from the cloud service provider itself; you don't really get visibility into the actual endpoints that are running code.

So what's the difference between on-prem endpoints and these cloud endpoints we're talking about? There is none. They're just virtual machines running in another spot. The thing is, there's more inherent security in the cloud because things are generally deployed in a zero-trust model. But when people forklift applications and servers from their on-prem environment into the cloud, we run into the same problem: everything can talk to each other, and that inherent security isn't there anymore. So we have to have visibility into what's actually executing, and know what's happening on those endpoints.

What's required to hunt — how are we supposed to hunt for bad things in the cloud? What we talk about at eSentire is raw telemetry. We like getting everything off of the endpoints, pulling that in, and then being able to run hunts for specific tactics or techniques. Getting everything, and not dropping anything, gives you the ability to go back and look for things that may not have been known before, and gives you the full visibility to hunt for certain types of threat actors within a cloud environment.

And what's the classic misconception about cloud security? That it's on the cloud service provider. As Jacob alluded to earlier: if you're running the instance and you're supplying the software, it's up to you to protect those endpoints. It's not the cloud service provider's job. And what are the risks of running workloads in the cloud? The main risk, if you don't have visibility into what's happening on your endpoints, is that you're not gonna know when something happens. You're not gonna know when a breach happens; you're not gonna know when an incident happens. There's also a lot of risk in insider threats within the cloud. So unless you're getting that data, you're kind of gonna be SOL, right? You won't know what happened historically if you weren't collecting all that data in the first place.

Shifting to the forensic view of this: when dealing with an incident, you need to have that data already collected. If something happens and you don't have endpoint telemetry, you don't have endpoint visibility, you're gonna be going back and looking at artifacts of something that's already happened. We wanna collect all that data up front and then be able to go back and look at it in a historical view. One of the issues is that in most cases, we're not able to get access to the actual hard drives backing those virtual machines, because they're distributed across giant data centers. Getting that for a court case, for example, is very difficult. The best chance you have is logging every single thing that happens on the endpoint, storing that data, and then being able to use it in a case: hey, all this stuff happened, I have the data, I have the proof, and this is what I believe to be the case. Another thing you can't easily do is memory analysis. Unless you have some agent on the endpoint that can query and dump memory, you're not going to be able to capture the memory contents of an instance unless you can do it at the time of the incident. And again, central storage is key: get all that data off the machine itself, stored somewhere else that's secure and archived, so you can go back and review it. The problem is, if the data is stored on the endpoint itself and someone gets access to that endpoint, they can wipe all the data off once they execute their task. So we wanna make sure we're getting all the data out, so that if something happens to the instance, we still have a history of what happened up to the point it got destroyed or the data got wiped.
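The shipping itself doesn't have to be fancy for the principle to hold. A bare-bones sketch of the idea — tail a log file and push each line to a collector over HTTP before an attacker can touch it. The collector URL is a placeholder, and a real deployment would use something like Filebeat, auditbeat, or an EDR agent instead:

```python
import time
import requests

COLLECTOR = "https://logs.example.internal/ingest"  # hypothetical endpoint

def follow(path):
    """Yield new lines appended to a log file, tail -f style."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

# Ship each auth event off-box immediately; once it's on the collector,
# wiping the local file no longer erases the history.
for event in follow("/var/log/auth.log"):
    requests.post(COLLECTOR, json={"source": "auth.log", "line": event}, timeout=5)
```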
Another issue we have to focus on is the east-west problem. This isn't an issue with cloud specifically; it's an issue with on-prem instances too. If you do not have east-west visibility, you can't watch for, or hunt for, lateral movement across an environment. So we use endpoints to get that visibility from a lateral-movement perspective: if one machine becomes compromised and then pivots to another machine, endpoint telemetry gives you the ability to follow that through the entire attack chain. You can see where the content dropped, you can see everything the bad guy did as long as it touched disk — and in some cases you can see portions of in-memory attacks as well. Without endpoint visibility, you're essentially blind. And, as Jacob mentioned before with the whole encrypted traffic conversation: everything that executes on the endpoint has to be decrypted at some point in order to execute. So if you have that visibility, you can tell the entire story — how it got there, how it executed, and how it communicated out.

So let's talk about some ways that code gets executed. In a few slides, we're gonna go through the main ways to execute code from the top level down, through the APIs. But first, I always have a spiel on PowerShell visibility. PowerShell is very powerful — obviously, PowerShell. There's a ton of great resources out there on how you should be locking it down; Microsoft put out a great post a few years ago on it, and there are countless conference presentations on hunting for PowerShell and detecting bad PowerShell within environments. These are some of my favorite resources, posted up here. And as we know, the attacker community loves to use PowerShell because it's so easy to evade traditional EPP solutions with it.

So what can we do about getting visibility into PowerShell? There's a bunch of different configurations that can be enabled on the back end: script block logging, module logging, and transcription. In my personal opinion, script block logging is the most useful of all of them, because it gives you the decoded PowerShell after it's run through the interpreter. Even if the attacker sends in a blob of encoded, encrypted PowerShell, the engine has to decode all of that to know how to run it, and on the back end you get the script block logs showing exactly what the attacker did. The biggest problem we have otherwise is that if a script file gets executed from PowerShell, you don't get visibility into what actually ran — all you know is that it ran a script, stuff happened, and that's it. As a defender, I need to know exactly what was executed, how it was executed, and what it did. So we need visibility into that. And again, I can't stress this enough: getting that data off the endpoint is key, because if someone is able to execute code, they're essentially going to be able to tamper with the data living on that machine. If you're not forwarding it out, you cannot trust that data, and it becomes null for an investigation.

So, script block logging: super easy to implement. I recommend implementing it across all infrastructure — on-prem, cloud, wherever. Anything bad related to PowerShell will have clear indicators on the other side of the script block logging. You should be collecting this, and if you're looking to do threat hunting, you should be using that decoded data on the back end to look for bad things happening within the environment.
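As a tiny example of what that hunting can look like: script block logs land in the Microsoft-Windows-PowerShell/Operational channel as event ID 4104, and if you're shipping those into Elk, a query like this pulls back decoded blocks containing common download-and-execute patterns. This is a rough sketch — the index name and field names below are typical Winlogbeat-style names, so treat them as assumptions and adjust to your shipper:

```python
import requests

ES = "http://localhost:9200"  # your Elk stack

# Event ID 4104 = PowerShell script block logging (the decoded script text).
query = {
    "query": {
        "bool": {
            "filter": [{"term": {"event_id": 4104}}],
            "should": [
                {"match_phrase": {"message": "DownloadString"}},
                {"match_phrase": {"message": "IEX"}},
                {"match_phrase": {"message": "FromBase64String"}},
            ],
            "minimum_should_match": 1,
        }
    },
    "size": 50,
}

hits = requests.post(f"{ES}/winlogbeat-*/_search", json=query).json()
for hit in hits["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("computer_name"), src.get("message", "")[:120])
```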
And what would an endpoint talk be without talking about MITRE ATT&CK? MITRE ATT&CK is a great framework that lets us put attack life cycles and categories specifically on the endpoint. It's a list of all the known, public attack techniques we've seen — that pen testers use, that threat actors use — and building hunting capabilities around this framework is key to being able to detect bad guys in your environment. It's really, really important, and it's one of the best published frameworks, in my opinion, for doing endpoint security hunting.

So with that in mind, what type of data sources do we need to get out of instances to do this sort of hunting? Everything related to endpoint detection and response revolves around these data sources: file monitoring, process monitoring, process use of network. All of the endpoint-specific parts of MITRE ATT&CK can be mapped back to these input sources. And because we know there's a lot of students here: how can we do this on the cheap? There are some open-source technologies we can use. Sysmon — System Monitor — has awesome coverage in the Windows space, and we'll go over that really shortly. There's osquery, for making live queries that can be mapped to ATT&CK — and there are projects out there already that have osquery queries mapped to specific ATT&CK techniques, so I'd encourage you to check those out. Obviously PowerShell logging — that's free, all you gotta do is collect it and ship it out. And on the Linux side, again, osquery, and there's auditd — there's a project out there that maps auditd rules specifically to MITRE ATT&CK, great project, again, I encourage you guys to check it out — and then there's Logstash with auditd, which is auditd on steroids.

So, talking about Sysmon: what features and capabilities do we get? We actually get quite a bit — there's a lot of coverage we can get out of Sysmon natively. There are a lot of different configs that have been put out by security researchers: SwiftOnSecurity, Cyb3rWard0g, and Olaf Hartong. There's a ton of good configs mapped directly to MITRE ATT&CK, and there are free Elk mappings as well. Jacob talked about it earlier, but when you're taking in data and mapping it to ATT&CK, the SIEM can be really powerful for that, and there's a lot of free, open-source frameworks for utilizing that type of data. And a shout-out to Cyb3rWard0g: this is probably a little bit old now, but this is the ATT&CK coverage layer for the Navigator, the JSON blob, so you can see the portions — if not full coverage — of components of the ATT&CK framework where Sysmon has the capability of giving you some signal source for detection.
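To make the osquery option from a moment ago concrete: it exposes endpoint state as SQL tables, so an ATT&CK-style check can literally be a query. A small sketch driving osqueryi from Python — here, hunting for processes with encoded-PowerShell-looking command lines. The query is just an illustration of the idea, not one of the published mapping projects:

```python
import json
import subprocess

# Processes whose command line looks like encoded PowerShell
# (roughly in the neighborhood of ATT&CK T1059.001).
QUERY = (
    "SELECT pid, name, path, cmdline FROM processes "
    "WHERE cmdline LIKE '%-enc%' OR cmdline LIKE '%-EncodedCommand%';"
)

# osqueryi can run a one-off query and emit the rows as JSON.
result = subprocess.run(
    ["osqueryi", "--json", QUERY],
    capture_output=True, text=True, check=True,
)

for row in json.loads(result.stdout):
    print(f"pid={row['pid']} name={row['name']} cmdline={row['cmdline']}")
```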
So now we're going to move on to actually executing code on endpoints. This is mainly going to be about getting code to execute once you have some sort of access to a cloud service provider, and we want to focus on going top level down. The thing I like to drive home is that these VMs in the CSPs are just VMs: all the normal ways to execute code are there. Shell code, script interpreters, binaries — it's all the same. There's just more inherent security in front of getting that code executed. From getting access to API keys, you have the ability to push code down, and we're going to cover that on both Azure and AWS. And then obviously there's the classic way: if you're hosting a server in the cloud running a web application and it's affected by a remote code execution vulnerability, that can still be targeted — the attacker gets a shell at that level, escalates, and then tries to pivot internally. But we're going to focus mostly on the API-to-endpoint path.

So in the Microsoft Azure space, what are the main ways you can execute code from the API? There's Run Command, there's the custom script extension, there's the Hybrid Runbook Worker, and there's the serial console. In this talk we're specifically going to cover Run Command and the custom script extension.

Run Command is a way to execute code across your VM infrastructure. It's used for tons of legitimate purposes — you can invoke it through the Azure portal, the REST API, the Azure CLI, or the top-level PowerShell cmdlets, and it's typically used to update or install applications. Great functionality — thanks, Microsoft. It also requires a certain level of permissions: you need the run command action permission, and you need access to the virtual machine itself. And what's interesting is that every single one of these drops the actual script to disk. Everything that's run is dropped to disk. So that's great, right? As a defender, a copy of that script is super useful to me.

This is the flow it follows: the Windows Azure guest agent invokes two command prompts, which invoke the RunCommand executable, which then executes PowerShell. As you can see here, when PowerShell is actually called, all you know is that it's calling a script — the script got dropped to disk, it calls the script, and it executes it. So we don't get much visibility here, right? What did the script actually do? The good thing is that they drop all the scripts to disk, so as a defender I can go look at the script: what was executed, the content that was provided, all this great stuff. But the attacker can just append a delete on the back end, and then it deletes all the content so you can't review it. So you're essentially blind when that script executes, because all the content that was stored, if you weren't logging at the time, is deleted. There's nothing to see.

Ah, but let us go to the logs. If you have script block logging enabled, you're actually gonna get the content as it executes. In this specific example, I turned off Windows Defender — great — and I also deleted all the script content, so you can let your minds wander as to what could be done in this type of situation. Again: logging is really important. The Linux flow is a bit different — obviously it's bash: run command, bash, and then execute the commands. Again, the scripts are dropped to disk, and again, we have the capability of deleting them by just appending content onto the back end of the script.

And then on the custom script extension side: very similar. It's used for legitimate purposes, and it requires a certain level of permissions, very similar to Run Command — it's just executed in a different way. The scripts are also dropped to disk here.
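Since both of these extensions drop their scripts to disk before anything runs, one cheap trick while you're experimenting is to watch the extension's staging directory and copy every script out the moment it appears, before an appended delete can clean it up. A rough sketch — the path below is where the Windows Run Command extension has commonly been observed to stage scripts, but treat both the path and the approach as assumptions for your own testing, not a product:

```python
import shutil
import time
from pathlib import Path

# Commonly observed staging area for the Windows Run Command extension
# (assumption -- verify the path on your own images).
WATCH = Path(r"C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows")
SAFE = Path(r"C:\forensics\runcommand_scripts")
SAFE.mkdir(parents=True, exist_ok=True)

seen = set()
while True:
    # Grab every dropped .ps1 before an appended cleanup step removes it.
    for script in WATCH.rglob("*.ps1"):
        if script not in seen:
            seen.add(script)
            shutil.copy2(script, SAFE / script.name)
            print(f"captured {script}")
    time.sleep(1)
```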
If you're trying to do something bad with it, though, there's a timeout: if you're running, say, a reverse shell built in PowerShell, that's gonna die after about 90 minutes. So you'd wanna create a scheduled task or something to get persistence on the box, presumably. And this is the workflow for the custom script extension from the portal itself — of course, this can also be done through the API. When you upload a script to be executed, it gets uploaded to a temporary storage container, and when it executes, you get the standard error and standard out from the script execution. It has a very similar flow to the Run Command execution: the guest agent invokes a command prompt, which invokes the custom script extension handler, which then executes PowerShell. And again, we're limited in what we can see here, because it's executing a PowerShell script, not a PowerShell one-liner — so logging is really important. It's dropped into a different directory, but we know where it is, so an attacker can expunge all the proof of something running within the environment unless you're logging at the time of execution. And here's an example of how this runs from a custom script extension perspective on Linux. Very interesting — it's actually done through Python: Python calls bash, which calls another Python interpreter, which then executes the script code. Again, it's all done through script files, it's all dropped to disk, and the attacker has the capability of deleting it, again.

Moving on to the AWS side, there are two main methods of code execution from the API down: AWS Systems Manager, which has Run Command, and then there's user data and metadata execution. Very much like the Azure side, it works the exact same way — it's legitimately used to install and update packages. The requirements are a little different: the instance needs the Amazon EC2 role for SSM, and the user with the API key needs access to the actual instance. The workflow is very similar: the SSM agent executes PowerShell, which then calls the PowerShell script. Again, because it's a script, we only get visibility into the fact that the script executed; we don't actually see what the code was on the back end. This also drops to disk, so we have data on the disk itself, but if the bad guy appended a delete function on the end, we'd have a very hard time recovering it.

So, in summary: logging activity from the CSP APIs, the microservices, and the endpoints is key for visibility. We can't stress enough that if you're not getting visibility into your endpoints, you have no idea what's executing — the CSP only tells you that content is executing, not what it is, which is a serious problem from our perspective. Always enable script block logging, across any environment, on-prem, cloud, wherever — pull that data in and experiment for yourself. AWS, Azure, GCP all have free service offerings you can use just to test functionality and features, to see how things work and what can be used. So we recommend that. And Elk, as a free tool, has a lot of power, and there's a ton of content out there from other security researchers on how to leverage it for threat hunting — it can be a really powerful tool that's very cost-effective in the long run. Anyways, thanks everyone for listening to our talk, and we'll answer some questions.