I'm really looking forward to this talk. If you've been to a lot of talks so far, you know that they have all been pretty technical. My understanding is we are going to get some entertainment and some jokes and some stories from Ben here. So let's get excited, give Ben Morris a big round of applause, and he is going to talk to us about AWS. Have a great time. Hey everyone, how's it going? You guys having a good time? All right, so am I. So am I. Thanks for coming in today. I know you've probably been waiting in a lot of lines, and I really appreciate you being so excited to come see this talk. I put a lot of work into it, so thank you. And we're going to be talking about a lot of cool stuff today. We're going to be stealing lots of secrets, we're going to be hacking AWS, I'll show how I did it all, and then we'll talk about how we can basically fix the issue. Just give me one sec to actually get my timer started. Okay, perfect. So yeah, thank you very much for coming in. Just a little bit of a disclaimer before we get started here: please do not arrest me, FBI. No post-exploitation was performed, and everything I found was basically publicly available already. I'm not going to be talking about any AWS zero-days or any exploits in customer-specific software. This is just a widespread misconfiguration issue with AWS, and I'll talk about that a little bit more later. But basically, you know, I was just kind of driving down the road and I looked out the window and said to myself, huh, a lot of people's disks are on fire. I should probably, you know, just call the police and let them deal with it, and I just kind of kept driving. So I didn't do a lot of post-exploitation, and I definitely stuck to "look but don't touch." But anyway, what is EBS? EBS stands for Elastic Block Store, and it's essentially a virtual hard disk that you can attach to a VM.
So anytime you spin up a virtual machine inside of AWS, it's going to have a disk that's automatically provisioned for it, and that disk is an EBS volume. They can vary in size, and the default is like 8 gigabytes, but basically anytime you start building an application using Amazon EC2, you're going to be running on one of these EBS volumes. So they contain your application code, your data, and everything else you would want to deploy. These volumes can basically be detached and reattached to various machines. You can move them around, kind of like network-attached storage in a way. You can clone them, you can delete them, you can copy them, you can do everything with them that you would expect. And for security purposes they generally come in four flavors: unencrypted, encrypted, public, and private. So you can have any combination of them, like public unencrypted, private encrypted, public encrypted, and whatnot. We're going to be looking at the public, unencrypted ones today. Those are the ones that are interesting, and those are just the fun ones that have all the credentials in them. If you have an encrypted or private disk, it's not really vulnerable. And also, these disks are private by default. So when you do spin up that instance, it is going to be backed by a private volume, which kind of made this vulnerability really interesting to me. I wanted to know who is out there exposing their disks to the public when AWS actually makes it pretty tough. You know, you have to go into a separate menu after you create the snapshot, and after you create the image, to actually go check that public box. So I was really curious to know who is out there doing this. So what could possibly go wrong with an unencrypted, public disk? Well, basically back in January, I was at an on-site for a client, and I was just really jet-lagged and not able to sleep. I'm sure you've all been there.
It's like, you know, you're on the west coast and you have to fly all the way to the east coast, or you're on the east coast and you have to fly to the west coast, and you get to that hotel and you sit there at the bar. But there's nothing to do. You're just stuck there. And then it's like 1 a.m., the bar is closed, there's nothing there. And you basically just say, well, I'll go get on my computer. Fine. So I thought to myself, well, I'll look at this client's cloud security controls. That'll definitely put me to sleep, you know. Except that it really didn't. I basically found an unencrypted EBS volume that was public on their account. And I really wasn't familiar with this vulnerability, so I just did what everyone else does: I Googled it. And I found some blog posts, but there were only a couple of them, and they didn't really talk about this vulnerability very much. They basically said, oh, well, that's bad, but just don't do it, and it's fine. Which always just piques my interest. I'm like, okay, well, now I've got to know more. So I basically took this client's disk, went through the 27 steps of attaching it to my VM through the console, and mounted it. And I realized that this client had made a copy of their entire web application available to the public internet. This basically had everything needed to run the app, including their AWS access keys, their AWS secret keys, API keys for third parties, and database credentials. And of course the database was exposed to the internet, because why not? Of course you'd want to do that when you have AWS; that's totally normal. So after I discovered that disk and basically had this incredibly critical finding, I knew I had to investigate more. I needed to know: how widespread was this bug? Because this bug was really powerful. It had the keys to the kingdom for this whole application.
So, you know, I started doing some digging, and I wanted to ask myself: why does this happen? How can this happen? Basically, there are two screenshots here. You want your disks to look like the top screenshot there, where it says "this snapshot is currently private." That's what you want it to look like, and by default, that's what it will be. But if you go into that tab and change it to public, or you have some kind of broken API code that ends up setting that snapshot to public, what happens is it shows up in that nice search box down there at the bottom. And this search box is wonderful, because you type in the word Jenkins and Jenkins servers come up, and you type in the word backup and backup servers come up. So basically at that point I kind of realized: whoa, this is really cool. Got something here. And when you do set that public flag and it shows up in that search box, if it does have sensitive information in it, you have to assume it's compromised at that point, because anyone can go search through it. And I don't know if you've heard about the Capital One stuff that just happened recently, or other S3 bucket exposures. This is kind of a similar vulnerability, in that you're going through someone's private data storage that they think is private, but it really isn't. One cool thing about this bug is that all of the snapshots are queryable, and you can pull all of the IDs back from the API. It's not like an S3 bucket, where you would have to start guessing people's bucket names to try to find one, and if they set something like a GUID for their bucket name, you're not really going to be able to find it. So this made it really, really fascinating and really cool to me, because you could basically just start going through all of their stuff in a programmatic fashion, very easily.
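Because every public snapshot ID is queryable, enumeration is a single paginated API call rather than a guessing game. Here's a minimal sketch, assuming boto3 is available; the `collect_snapshot_ids` helper is my own illustration, not something from the talk:

```python
def collect_snapshot_ids(pages):
    """Flatten paginated describe_snapshots responses into a list of IDs."""
    ids = []
    for page in pages:
        ids.extend(s["SnapshotId"] for s in page.get("Snapshots", []))
    return ids

def list_visible_snapshots(region="us-east-1"):
    """List every snapshot you can restore, which includes all public ones.

    With no OwnerIds filter, describe_snapshots returns every snapshot you
    have create-volume permission for: your own, plus the entire public set.
    """
    import boto3  # imported lazily so the pure helper above works without it
    ec2 = boto3.client("ec2", region_name=region)
    return collect_snapshot_ids(ec2.get_paginator("describe_snapshots").paginate())
```

That one call is the whole difference from the S3 case: there is no name to guess, just a list to walk.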
And, you know, even if you just want to use the web GUI, you can just start typing stuff in there. So at that point I was just like, this is awesome. So let's talk a little bit about what I found, because everyone likes loot, and that's what you're all here for. So what did I find on these disks? I found a lot of stuff. First I'm going to give three examples of some critical exposures that I was able to find, and then I'm going to talk at a higher level about overall trends and some more stuff that I did find. The first example I'm going to talk about is about some robots. And I like robots. Robots are great. They're our friends. If you think about robots and service accounts in your own organization, you may be thinking about your Slack bot or other chat bots you have; think about what those robots can do. They can do things like push code or deploy new builds. They can do a lot of stuff and have a lot of access. But usually there's this interface between you and the bot, like some kind of chat or something, that delegates permissions. In this case I was able to find these credentials in a userdata.config file on this random disk, and with them, I was the robot. So what could I do as the robot? Well, I didn't have any permission restrictions, and the ability to deploy stuff seems pretty great. And before I started looking at the disk, one more thing. This output right here is basically the equivalent of whoami for AWS. There's one command you want to run whenever you find a set of credentials: sts get-caller-identity. That is basically whoami. Anytime you have any set of credentials, you can pretty much always call this API endpoint, no matter your region, and it will come back with who you are. So this is just a simple listing of who I am.
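That whoami check is easy to script. A sketch, assuming boto3; the `summarize_caller` helper that parses the ARN out of the response is hypothetical, my own convenience, not part of the talk:

```python
def summarize_caller(identity):
    """Pull the account ID and principal out of an sts get-caller-identity response."""
    arn = identity["Arn"]  # e.g. arn:aws:iam::123456789012:user/robot
    return identity["Account"], arn.split(":")[-1]

def whoami(access_key, secret_key, session_token=None):
    import boto3  # imported lazily so the parser above is usable standalone
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        aws_session_token=session_token,
    )
    # GetCallerIdentity works for any valid credentials, in any region,
    # and requires no IAM permissions at all.
    return summarize_caller(sts.get_caller_identity())
```

The session token argument is there because a lot of what you find on these disks is temporary STS credentials, which need all three values.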
And so I started looking through this disk to try to find clues. One thing that's interesting about this is there's always this scavenger hunt to figure out who owns the disk. So in some config files near where these creds were, there were some database configs with some internal URLs and some domain names. And these domain names led me to a pretty cool company. The company ended up doing a lot of really interesting things, like tracking ISIS social media requests and posts, and recording border interdictions. They were basically a software-as-a-service company that sold pretty much exclusively to the government. So they're just doing government stuff, and their robot's keys are just sitting out there for anyone to go grab. So if you wanted to, you know, read up on what ISIS is doing on social media, these are the guys you want to talk to. So at this point, I basically, like, shit my pants. What am I in control of right now? So we reached out to this company, and they were of course very grateful; a super positive response, and they gave us some remediation steps. So they really liked this. And this one just kind of highlights the problem entirely. You could find anything out there, and these accidental exposures could contain anything, really. So it was really interesting just going through them all. The next set of credentials I want to highlight is something I just call Woot Woot. Basically, there was a disk with a Dockerfile. And if you're familiar with Docker, it's just a way to manage your infrastructure, basically. And there was some other code around there: a Golang program that was compiled, and then some scripts, and it looked like they were mostly for system administration.
It looked like this thing was responsible for spinning up infrastructure. And the one thing that I couldn't really figure out was who actually owned this disk. The config files didn't really have any clues. There weren't any domains; it was all just internal 10-dot addresses. So it was just kind of like, okay, well, I don't know who owns this disk. But what I did know was which account I was in and who I was. And who I was was root. So just out of thin air, I was able to grab some root credentials for this account. And if you're not familiar with AWS, root is basically god permissions on an AWS account. It has unrestricted access. It's an administrative account, and you're actually not even really supposed to use it. You're supposed to create an admin account and delegate admin permissions to it, so that you're not directly using the root account. But these guys thought it was a bright idea to just start using that root account. And they said, oh, well, no one will ever find this disk. It's just some internal thing that spins up infrastructure. No one even really interacts with it, except me, you know. So the highlight here is just that you could find everything. This was actually the only set of root creds I found in the disks, so that was kind of cool. I honestly wasn't even expecting to find it, just because I think everyone kind of knows not to do this now. But this just highlights, again, the critical nature of these disks and what they contain, because a lot of people just aren't expecting you to be able to get access to them. So the next one I want to talk about is a little piece of software I love, near and dear to my heart: Jenkins. And if any of you have owned some Jenkins machines out there, you know why I love it. It's always full of credentials.
It has access to production source code, and it can push builds. People do all kinds of crazy stuff in their Jenkins jobs. So anytime you come across a Jenkins server, it's just great. Tons of stuff. So in this case, I found a Jenkins server, and it looked like a developer instance. It looked to me like some developer was trying to get an internal application to work with their own Jenkins setup. So they kind of spun up a copy of their Jenkins server and were trying to get an application to work properly with it. In the Jenkins server I found some AWS credentials, and I popped them into sts get-caller-identity, and I found out I was a dude named Kumar. And I thought, wow, that's great. I'm Kumar now. Sounds good. So I looked in the users.xml file, which, if you're not familiar with Jenkins, is just the file that holds basically all of your usernames and passwords for users on the machine, assuming there's no single sign-on in place or Active Directory integration. This users.xml file was kind of funny, because I looked at who made the server, who had the admin account, and their email address, and it was definitely not anyone named Kumar. So, you know, it was kind of funny. Some guy sat there, and first they exposed their disk publicly, which was really bad, and they exposed their AWS credentials. But then they also somehow framed their co-worker. I don't know why. Maybe they were trying to frame their co-worker. I don't understand. But yeah, so we got Kumar's keys and started looking around, trying to figure out who this was. And from the email address we were able to determine it was a software company, and while I can't name them, I can talk about who they do business with; from their website, these are the people they work with.
They work with Salesforce, Apple, FIS, and a lot of other Fortune-whatever companies. So this is a large software consultancy firm, and they just do a lot of cool stuff. But these keys are just sitting out there, and they're keys that could potentially impact these other companies. I'm sure Salesforce and Apple and all of them have very good perimeter security, and they're making sure that they keep a tight leash on their developers so they're not doing this kind of stuff. But in this case, you could almost have a compromise happen because of a contractor who maybe only has indirect access. And are you really watching that whole supply chain of contractors and who you're actually doing business with, to make sure their security is also not weak? Because in this case, the compromise could definitely lead to some pretty severe consequences, just with the amount of work this company does. So overall, these three exposures highlight the critical severity and the kind of stuff you're going to find when you come across these disks. And all of this just makes sense: these are just people's application servers. It's just a lot of developers kind of doing whatever and trying to make their stuff work. So overall, these are the kinds of things I was looking for. When you get into a disk, you find a lot of leaked source code, of course, because they're mostly application servers. And a lot of people are doing AWS right: they have a set of temporary credentials, which allows their credentials to basically expire. So if you don't get access to those credentials within about 24 hours, which I think is the default, those credentials rotate out. So that's good.
But even if those credentials rotate out, you're still going to have that source code lying around on someone's disk. So we found source code for some government contractors and some large tech companies. And a lot of these are just boring internal applications. But a lot of them also give really good insight into how these companies operate; even just having their source code is really dangerous. We got source code for a bunch of internal applications that host huge databases for some tech companies and such. So it's just really, really cool source code, even if that's all you get. And at the end of the day, that's just a medium-risk finding. Another thing we got was tons of private keys. I know it says SSH up there, but there were just lots of private keys; think TLS certificates and whatnot. We have just tons of client and server keys. And anytime someone is using one of those exposed server keys, you can man-in-the-middle them. And some of the client keys we got just allow SSH access. You're going through people's bash histories trying to figure out, well, where are these IPs? Who's running these servers? Try to figure it out. So it was a little bit harder to determine whether those creds were valid. So a lot of the SSH keys and whatnot we just directly handed over to AWS and responsibly disclosed with them, to make sure they got word to their customers: hey, your keys are just sitting there; you should probably do something about that. Another thing we got a lot of, which was kind of surprising to me, was SQL files that contained a lot of people's personal information. And I think a lot of these came from developers.
They would, you know, borrow some data from production and move it down to a development environment so they could play around, debug their application, do whatever they needed to do, but then they kind of left their disk out there just sitting there, and these SQL files contained thousands of people's usernames, hashed passwords, email addresses, phone numbers, all of that stuff. So just some really nasty hygiene around SQL files and all of that. Another thing we got a lot of were WordPress installations, which are pretty cool. I should clarify: some of the things we found were actually WordPress backups. So in the backup, you're going to have the database, which would basically be a SQL file, and that will contain all the password hashes, and also API tokens for third parties, which are always great to find because they allow you to escalate further and do more privilege escalation in their environment. You can potentially start taking over more and more resources. So finding those API tokens was also really, really great. And a lot of just off-the-shelf software; a common pattern seemed to be developers again doing a bunch of dev and debug work, pulling down something like a Drupal instance, throwing it up there, and then just leaving it. So a lot of those kinds of credentials were lying around. And also VPN credentials. Lots of OpenVPN creds, and some of them were for legitimate companies who were using them to access their internal network. So at that point, it's pretty much game over, because one of the big goals of an external attacker is to get internal network access. So that was really cool.
But also, I found a lot of people had their HideMyAss creds and other VPN providers' creds just sitting out there on these disks. And that made me really, really curious: what kind of attacks could you accomplish? You could definitely just abuse them, but could you also maybe man-in-the-middle people when they think they're on their VPNs? I'm not really sure. But I thought that was really cool to find. And it's also really curious that people are automating their HideMyAss setups. And then, just in general, we found lots of AWS keys, Google OAuth tokens, third-party API tokens, and email passwords. Think about your SMTP creds; your web apps are using SMTP or Mailgun to send mail. One app I specifically remember very clearly was something just called "surveillance app." That was the repo name: the Git directory is just "surveillance app." Okay. So I looked at it, and it's just a bunch of code that takes raw RTSP streams and dumps them into S3 buckets. So whoever or whatever this thing is surveilling, I have all of their keys now. I can go read their buckets, I can go look at who's being watched, and I could basically surveil their surveillers. And that one was so hard to not touch. It's so hard to just not touch this stuff and want to just explore. That one in particular, you know, you want to see, like, webcam roulette; you always want to see what's behind that webcam, or one of those crappy cameras that's just on the internet. So that one in particular was just kind of funny to me. It was really hard to just bite my tongue there. So yeah. So overall, we just looked for a lot of the easy wins.
You know, when I initially started off a lot of this research, I kind of had this dream of, oh, I'll steal everything under the sun and then just deal with it later and go through it. But it just turned out that grepping for the common stuff you'd think would be good was a good approach. This just kind of goes to show we had a lot of success there. So yeah, just a lot of great stuff. And, you know, I'm sad I couldn't do a lot more poking at what it actually was when you do find something, but it was still really, really cool to find all this stuff. So basically, I want to talk a little bit about how I found all of this. The vulnerability and the misconfiguration on the surface is pretty easy to exploit. And with that client back in January, I was doing all of that manually. You can definitely do that; it's totally possible. But I basically wanted to automate that process a little bit, because when you talk about temporary credentials and things like that, it is a bit tricky to deal with them manually. If you have some set of temporary credentials, those are exposed for a good window of about 24 hours. And if you get those credentials, you can basically endlessly refresh credentials. I didn't really look at this too hard, but it's definitely a known technique: if you get those temporary IAM credentials, you can just refresh them endlessly until they basically get rotated out from underneath you. So it was really important to have some automation in your tool belt. You can exploit some instances manually, but a lot of the ones you're going to find are just unlabeled and difficult to identify, so you definitely want to have some automation there. And there's basically a really simple three-step exploit process here. You're just going to pick a snapshot.
You're going to attach that snapshot to your EC2 instance, and then you're going to search it for secrets. But the problem is there are about 120,000 disks exposed across all regions. And with a lot of them, you're just not sure what they're going to be, because they're not really labeled. And a lot of them are just garbage; they're just totally legitimate disks. So each of these steps has some nuances that are kind of tricky. So the first step... oh, clicker. Oh, sorry. The clicker was malfunctioning. So at a high level, this is the architecture I used. There's nothing really crazy here. I'm basically just using an asynchronous queue to send new snapshots to workers, and that master in the middle is just a little application that coordinates everything. The master polls all the snapshots about every minute, looking for new ones. When a new one is detected, it puts it in the queue, and then the worker in the region the message is destined for picks up that message and starts the scraping process, which extracts all the secrets and throws them into the database. So it's a pretty simple process. And this asynchronous queue and worker setup gives us the ability to scale the worker processes up and down as we need, so we can save a little bit of money too. Which is always nice. And all of the code and all of the scanning is within the default AWS API limits. I didn't have to ask for anything like, "well, I need to scan lots of disks." They let you do about five disks concurrently across everything. So within those limits, within a default AWS account's limits, I was able to scan these disks. There wasn't anything preventing me from doing this, basically. So with the architecture out of the way, step one: pick an exposed snapshot. So, what to read. And there are two ways you can go about this.
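The master/worker split just described can be sketched in a few lines. This is a minimal illustration, not the actual tool: `poll_fn` stands in for the describe-snapshots call, the queue stands in for whatever real message broker sits between master and workers, and the one-minute interval matches the talk:

```python
import queue
import time

def new_snapshots(seen, current):
    """Snapshot IDs we haven't queued for a worker yet."""
    return [sid for sid in current if sid not in seen]

def master_loop(poll_fn, work_q, interval=60, max_polls=None):
    """Poll for exposed snapshots and enqueue each new one exactly once.

    poll_fn() returns the current list of exposed snapshot IDs; in the real
    pipeline, a per-region worker pops IDs off work_q, clones and mounts the
    disk, and runs the secret scraper.
    """
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for sid in new_snapshots(seen, poll_fn()):
            seen.add(sid)
            work_q.put(sid)
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(interval)
```

Keeping the master this dumb is what makes the scale-up/scale-down property work: all the state that matters is the set of IDs already queued, and the workers are interchangeable.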
You can do an exhaustive brute force over all of the disks. That's totally possible; you can spend a couple of months doing that. Or you can take a more careful approach. I initially started off doing the brute-force approach. Like I said, I just wanted to steal everything under the sun. But that just didn't really work out, for basically three reasons. First, I was just fishing up a lot of garbage. The Human Genome Project is on AWS, and there's genome sequencing happening, so you get like 20 genome-sequencing disks in a row and you're just like, man, I really want some AWS creds, and all I'm getting is a ton of garbage. And there's no faster way to thrash your database than just filling it with worthless files from that. So there's that. Second, each disk takes about two to five minutes to scan. At a minimum, just the logistics of cloning the disk, mounting it to your image, and then detaching and force-detaching it takes about two to five minutes. So the more disks you have to scan, the more time you spend; not exponentially, but just a lot more time. And third, it just costs money. Who likes spending the profits, right? So basically I came up with a way to filter out these disks. Each disk has an owner ID, and that owner ID basically just tells you who made the disk. So I looked at the owner IDs, I counted them, and I looked at the frequency with which owners publish disks. And what I found was that there were a couple of outliers that published about 50 or 60 percent of the disks. There were about four to five of them, and one of them is Amazon themselves. So these disks were basically just worthless. They're just deployments of things like GitHub Enterprise, and just a bunch of stuff you don't care about.
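That owner-ID cut is just a frequency count. A sketch of the idea; the 10% threshold here is my own guess at what would separate the handful of bulk publishers from everyone else, not a number from the talk:

```python
from collections import Counter

def drop_bulk_publishers(snapshots, max_share=0.10):
    """Filter out snapshots from owners who publish a huge share of the total.

    A handful of accounts (Amazon among them) published 50-60% of all the
    public snapshots: stock deployment images with nothing worth grepping.
    """
    counts = Counter(s["OwnerId"] for s in snapshots)
    total = len(snapshots)
    bulk = {owner for owner, n in counts.items() if n / total > max_share}
    return [s for s in snapshots if s["OwnerId"] not in bulk]
```

Run over the full listing, a cut like this is what brought the scan set down from ~120,000 disks to roughly 20,000.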
So I cut those out, figuring the smaller owner IDs would have a better chance of reeling in those credentials. Using the owner ID, I was able to cut the number of disks I had to scan down to about 20,000, and that just made the whole process much faster and let me finish it in time. The next step is attaching the volume, and there's this nice AWS butterfly effect that happens where you end up wasting a lot of money because a tiny bug in your code ended up breaking everything. And there are some really interesting failure points that I didn't realize existed. Like, I didn't know the metadata URL could fail. One day it failed, it crashed my Python script, and I had to manually kill those zombie disks that were left lying around. But it made me think: well, if I'm testing for SSRF and I have some scanner that throws the metadata URL at the target, and that metadata URL just happens to be broken during that time period, I just got a false negative. And it kind of made me rethink some of the AWS testing that I do myself, just in simple cases like that. And then you're also going to have a ton of file system issues, like LVM disks that for some reason just need a totally separate tool to mount and unmount. I don't know why, but that's just the way it is. So you're going to run into a lot of those issues, and you want to make sure that you're taking care of them. And then, searching the disk for secrets. One thing you can do is use something like DLP Diggity or Gitrob or TruffleHog to go through specific things, and that's pretty much what I did. I just stole the greps from TruffleHog. Thank you. And then I came up with some of my own to look for the private keys and whatnot. So this process was pretty straightforward.
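A stripped-down version of that grep pass might look like this. The two patterns below are illustrative stand-ins I've picked (an AWS access key ID prefix and a PEM private-key header), not the actual TruffleHog regex set:

```python
import re

# Illustrative high-signal patterns; the real scanner borrowed TruffleHog's
# regexes and added custom ones for private keys and the like.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text(name, text):
    """Return (filename, pattern-name, match) triples for one file's contents."""
    return [
        (name, label, m.group(0))
        for label, pattern in PATTERNS.items()
        for m in pattern.finditer(text)
    ]
```

Grepping for a small set of high-signal patterns like this, instead of hoarding whole disks, is what made the "careful approach" tractable.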
It was mostly grepping for really high-signal stuff. And I also sniffed the MIME type of each file; I would attempt to only scan files that didn't look like binaries or images or whatnot. This cut down the number of files I had to scan on each disk and led to faster scanning. And another interesting thing I did: because you have access to all the default disks on AWS, I just spun up all of the default disks and made a huge blacklist of every file that you don't want. And then I manually added some entries, like /etc/shadow; we always want to steal shadow files. So we go through this whitelisting and blacklisting process. And all of this ended up coming together to make it pretty quick to scan these disks. Each disk can go down in about seven minutes. So that's pretty good for the purposes of my research, and it ended up working pretty well. And yeah, so we just end up grepping through everything. And just some lessons learned: have tests for your code. The AWS butterfly effect is definitely real. The AWS APIs will return errors you definitely don't expect, like the metadata URL failing. And you definitely want to design for multi-region up front. I made some design decisions that ended up messing that up a bit, because I didn't realize snapshot IDs are actually not unique across regions. So you could have two different snapshots in two regions with one snapshot ID, and if the primary key in your database is that ID, you're going to have a bad time. So just make sure you think about all of these things up front before you start. So we've talked a little bit about what this is and how to find it. Now we can talk about fixing this problem.
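Before moving on, the file-triage step described above (skip binary-looking files, blacklist everything that ships on a stock image, always keep shadow files) might be sketched like this. Two caveats: I'm guessing the type from the filename with the stdlib here, whereas the talk sniffs the actual MIME type, which would mean something like libmagic on file contents; and the blacklist entries are placeholders, where the real list came from walking every default AWS disk:

```python
import mimetypes

# In practice this set is built by spinning up every default AWS disk image
# and recording each file path; these two entries are just placeholders.
STOCK_FILE_BLACKLIST = {"/usr/bin/ls", "/etc/hostname"}
ALWAYS_SCAN = {"/etc/shadow"}  # manually whitelisted: always worth stealing

def should_scan(path):
    """Decide whether a file on a mounted snapshot is worth grepping."""
    if path in ALWAYS_SCAN:
        return True
    if path in STOCK_FILE_BLACKLIST:
        return False
    mime, _ = mimetypes.guess_type(path)
    # Unknown types get scanned; images, archives, etc. get skipped.
    return mime is None or mime.startswith("text/")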
So remediation: what does remediation look like? In this case it's pretty easy, but there are a couple of things to keep in mind. You definitely want to go search your AWS accounts for any public disks that are unencrypted. If you do find one that has sensitive information in it and is not meant to be public, take down the snapshot — first of all, that should be obvious to everyone. But then you also want to rotate your credentials. I know a lot of people like to skip this step because it's a pain. It's hard. "Oh, there's that one system someone built five years ago, we can't rotate the creds." I'm sorry, but you should definitely rotate the credentials, because if you think about the vulnerability, I'm scanning every single disk basically every minute. As soon as you make that snapshot public, I'm going to initiate a copy, and once that copy is initiated, it takes about two to three minutes to finish. From that point the copy is mine. If that copy finishes, you can't do anything about it, and honestly, you're not even going to know. That's one thing I thought was interesting about this attack surface: unlike a lot of other attacks on AWS — if you're sending a lot of web requests to brute-force directories or something like that, you're going to show up in logs — in this case you're not directly attacking a customer on AWS, so you're not actually going to generate any logs. It's going to be really difficult for someone to detect that you're cloning their disks. There's basically no way to do it — at least none that I could find in my research. There might be some logging set up somewhere, but I couldn't find it. So it's kind of a sneaky attack: you're going to have your creds stolen
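The first remediation step — searching your own accounts for public snapshots — comes down to checking each snapshot's createVolumePermission attribute for the "all" group, which is what makes it publicly restorable. A rough sketch with boto3; the AWS-calling function is an illustrative example and needs real credentials, while the decision helper is pure and testable on its own:

```python
def is_public(create_volume_permissions):
    """True if the createVolumePermission list grants the 'all' group,
    i.e. anyone can create a volume from (copy) the snapshot."""
    return any(p.get("Group") == "all" for p in create_volume_permissions)

def find_public_snapshots(region):
    """List this account's snapshot IDs that are shared with everyone.
    Illustrative sketch: requires boto3 and AWS credentials to actually run."""
    import boto3  # imported here so the pure helper above has no dependency
    ec2 = boto3.client("ec2", region_name=region)
    public = []
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            attr = ec2.describe_snapshot_attribute(
                SnapshotId=snap["SnapshotId"],
                Attribute="createVolumePermission",
            )
            if is_public(attr["CreateVolumePermissions"]):
                public.append(snap["SnapshotId"])
    return public
```

Run this per region (remember, snapshots are regional), and anything it returns that isn't deliberately public goes straight into the take-down-and-rotate process described above.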
and you're not really even going to understand how it happened, because there are no logs. So if I'm stealing your disks every minute, even if it was only a brief exposure, you want to make sure you actually rotate those credentials, because anyone performing this attack is almost certainly going to have a setup similar to mine, pulling snapshots down all the time, and they're not going to let you get away without rotating. The last step, of course, is to have a little post-mortem. You want to make sure you understand how this vulnerability happened in the first place. You definitely don't want to just let your developers have free rein. Is this an SDLC problem? Or is it some random script that you occasionally use and that just happens to create public snapshots? You don't really know, so definitely check it out and investigate how you got there. You can go through this process manually. I'm going to delay the release of the tool by a couple of weeks to coordinate with Amazon, get some of these disks taken down, and give people an opportunity to go through their own disks and make sure there are no exposures. But after a couple of weeks I'm going to release Dufflebag, and you can download that to help you scan disks. I've written it in such a way that it will work on any disk you have, private or public, so it can help you scan for exposures in your own organization. Because one thing I find a lot is you'll run a Scout Suite report and get back a bunch of unencrypted EBS volumes, sometimes a thousand of them,
and you're like, how do I go through all of these and figure out whether anyone should actually care about them? This tool works with public and private disks, as well as a wide variety of file systems, so you should be able to use it to sift through that pile of hard drives you need to go through in your own organization. So definitely check it out in a couple of weeks; it will help with remediation too. Just wrapping up here, some more loot that I thought was pretty funny. When I crack passwords, I always like to pull out my favorites, and I found Nuglover's password, which was for a Docker account. So thank you, Nuglover — and if that's your password, you should probably change it. Another time I was looking at some disks and found some creds on there. I was super excited, going through everything, super, super jazzed — AWS creds! And then I realized: oh my God, this is a capture-the-flag box. I just captured the flag in a CTF I didn't even know I was playing. So that was another great find. One other thing I found on there was wallet.dat files, and if you don't know what that is, it's the private key for your Bitcoin wallet. I found Zcash wallets and everything too. When I found those, my heart rate went up so much — you have no idea. I thought I was about to be crypto rich and go live on my island. Yeah, it didn't turn out that way. I'm here and not on an island, so you can tell I didn't really get any money out of it. It was mostly just people toying with cryptocurrency, which is great, but I didn't get crypto rich off this bug, unfortunately.
That would have been nice. So maybe in the future someone will throw a wallet.dat up there with a couple hundred bitcoins in it. Another thing I found a lot of was SSH keys on Windows machines. I was really curious about this, because I thought most people using AWS would be running Linux or some other disk, but it turned out there were a lot of Windows disks I wasn't expecting, and I had to go make a better blacklist for them. Every time I pull an id_rsa off a Windows box, I smile a little, because it's kind of like — this is cool. So there was a surprising number of Windows machines out there, and a lot of them were misconfigured in this manner, which facilitated all the secret exposure. Lots of Windows disks, some really cool stuff there. And I'm sure there's a lot more out there. I was pretty time-boxed on this research: it started in January, then I put it off and told myself, well, no one is really going to worry about this. But then I realized how widespread it is and wanted to look at it in more and more detail. There's still a lot out there. I was only able to look at text files and whatnot; there could still be database files and lots of other interesting information. I also limited myself to disks under about 100 gigabytes, so there's still more attack surface. And the cool thing is new disks pop up every day — about five or ten every single day. So every day you get a nice little chance at a present, a treasure hunt every single day, which is just fun. You get those emails back: oh, found some creds. So, some conclusions here.
I manually validated about 50 sets of credentials, and after that I was like, all right, I can't do this anymore — my eyes are bleeding from grepping out creds and testing them against a million different things. So I just gave them to Amazon and let them deal with it. I estimate there are about 750 to 1,250 high and critical exposures across all the regions, and that's just a direct extrapolation from the regions and the disks I was able to look at in this time. There wasn't really a pattern to who was impacted. It was random software companies, government contractors, healthcare — everyone, just kind of random. So not a whole lot of patterns there. Overall it cost about $300 plus R&D time, so it's a very cost-effective attack, which was not my assumption when I first started this. I thought spinning up all these hard disks would be pretty expensive, but it turns out that if you destroy them quickly after you've scanned them, it just doesn't cost that much. And because you can do it passively over time — when a new snapshot pops up, you scan it really quick and turn it off — it's pretty cheap to do, which I thought was great. Getting a set of root keys for $300 seems like a pretty good deal, along with all the other stuff — plus the research and development time, of course. And that was just a really cool aspect of this that I really liked: super cheap creds. What could go wrong? So that's pretty much it. I just want to thank you all again for coming, and I want to thank these people as well. Thank you so much for coming here today.
So, yeah. Enjoy the rest of your DEF CON, everyone. I'm going to hit the bar, and I'll probably field some questions outside if you want to chat with me. I would love to talk about this stuff and any ideas you've got. So yeah, have a good one. Enjoy. Thank you.