My name is Jason Haddix, and this is Domain Discovery: Expanding Your Scope Like a Boss. We did some strikethrough here because it's not just domain discovery; I added a little bit of web discovery too. This talk is primarily focused on my discovery methodology. By no means think I have the best discovery methodology, but I think it gets me a lot of sites to hunt in bug bounties and pen tests when I'm going after a certain entity or business. That's my Twitter handle if you want to yell at me on Twitter. I work for Bugcrowd, where we manage enterprise bug bounties. On the global leaderboard of 600,000 registered testers I'm currently 59th on Bugcrowd, and I play a lot of video games. That's me and my son, and we're both wearing matching dinosaur shirts, if you can see that.

Okay, so this is my methodology in a nutshell. You might have seen this a couple of weeks ago; I did release a version of it, but I've added more since then. This is my methodology for doing discovery when hacking websites. This isn't an OSINT talk where I go into hacking people and their personal information; it's a methodology to find more web assets for me to go after in a bug bounty or pen test. Usually, when you find undiscovered web assets, they're less secured than the main domain, and in the bug bounty world that means I can find better, more critical bounties on those sites.

So we start with identifying our target's main TLDs. We move on to domain scraping for those discovered TLDs, then domain brute forcing, then permutation scanning and port scanning. We do some visual identification and some auxiliary stuff, and then we go into the sites we have now discovered and start doing platform identification, content discovery, and parameter discovery. So this is kind of how it works.

The first thing you do when you start trying to discover assets or IPs associated with a domain (it's hard to see this slide) is look up the company's ASN, the autonomous system number. This is their registered IP space; organizations are assigned ASNs. Here I've looked up Tesla, and Tesla has a couple of different blocks. The site I use to do this is Hurricane Electric. It's one of the only free sites that will let you pull down the large set of ASN data; some of the others will let you pull it down but cap you at something like 500 requests or 500 lines of output. Here I can see that Tesla Motors has a registered netblock of 209.133.79.0/24. This is one way I start gathering their IP space so I can look for assets, and it's just one block out of all of them.

Usually when I start, though, I'm starting from a bug bounty, so I already have a couple of targets to start with, usually their main domains. We're using Tesla as the example because they have an open bug bounty and it's pretty permissive: they say anything they own is in scope, including the cars and the web assets and things like that. So I would probably already have tesla.com or teslamotors.com in scope, and then I would look up their ASN.

Then you want to do reverse WHOIS. You can do it manually with these two sites, and there are also some tools that automate this.
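Speaking of automation: the ASN step itself is easy to script. Here's a minimal sketch, assuming the third-party `ipwhois` Python package (my illustration, not something from the talk), that maps an IP from a discovered netblock back to its ASN and registered CIDR:

```python
# Minimal sketch: map an IP back to its ASN and registered netblock.
# Assumes the third-party `ipwhois` package (pip install ipwhois); the
# talk does this step manually via Hurricane Electric (bgp.he.net).
from ipwhois import IPWhois

def asn_info(ip):
    """Return the ASN, its description, and the registered CIDR for an IP."""
    result = IPWhois(ip).lookup_rdap(depth=1)
    return {
        "asn": result["asn"],
        "asn_description": result["asn_description"],
        "cidr": result["network"]["cidr"],
    }

if __name__ == "__main__":
    # An address inside the Tesla Motors example netblock 209.133.79.0/24
    print(asn_info("209.133.79.10"))
```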
One of the frameworks I'm going to rely on a lot in the next slides is called Recon-ng, and it has a module for reverse WHOIS. But if you want to do it manually, these are, again, the two sites you want to use, because they don't cap the amount of data they pull back. All of these online sites that do reverse DNS lookups or give you information about domains want to charge you at some point for some of this info, and as a pen tester I'm not going to pay $70 for every lookup I need to do. So reverse WHOIS will find some more in-scope targets for this type of stuff.

The last one in this area is acquisitions. Crunchbase is really the end-all-be-all site for acquisitions: when a company acquires another company, Crunchbase records it in their acquisitions table. You can just go to the URL at crunchbase.com/organization/ followed by your organization's name, and here I have a history of everything Tesla has acquired in the last ten years. Depending on how open the scope of your pen test or bug bounty is, these domains are now top-level domains that I have to feed back into the beginning of my discovery: Grohmann Engineering, SolarCity, Riviera Tool. If I were doing this for real, I'd also want to check out SpaceX, just because I know they're related to Tesla. So you add these top-level domains to your testing too.

This should be automated, but these sites really want you to pay for this data, so they're all protected by different bot-protection tools. Crunchbase is protected by Distil, which is actually pretty good; the normal web-scraping libraries just don't work against it, otherwise I would have automated this as part of a tool or something like that.

Okay, so that's gathering from OSINT sites. What I want to do now in this section of the methodology is move on to finding subdomains. I've found a whole bunch of top-level domains, and I've found IP addresses they've used. Now what I want to do is find subdomains, and there are a couple of ways to do this, but they all involve scraping more of these open source sites. So what do we do? When you're doing subdomain scraping and trying to find more subdomains of a top-level target, a TLD, you scrape search engines. The idea is that you go to google.com and search site:tesla.com, and you get a whole bunch of results back for tesla.com, and also results like admin.tesla.com and forum.tesla.com. Then you iteratively make more Google searches, removing the subdomains you've already found (site:tesla.com -www -admin -forum, and so on), until there are no more results and you have everything Google has ever seen for tesla.com, all the subdomains. Doing that manually is kind of a hard process, so you use tooling.

The other place you can look for subdomain data is inside certificate transparency projects: crt.sh, Google's Certificate Transparency project, and the subject alternative names in SSL certificates. These could each be individual talks; you can go deep into scraping subdomains out of certificates, or into search engine scraping, and I think there have already been a couple of talks on those.
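As a concrete illustration of the certificate transparency angle, here's a minimal sketch that pulls subdomains out of crt.sh's JSON output endpoint with Python's `requests`; the `%.domain` wildcard is how crt.sh matches all subdomains, though treat the endpoint's exact behavior and rate limits as assumptions to verify:

```python
# Minimal sketch: harvest subdomains from certificate transparency logs
# via crt.sh's JSON output. Assumes only the `requests` package.
import requests

def crtsh_subdomains(domain):
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value can hold several newline-separated names per cert
        for name in entry["name_value"].split("\n"):
            names.add(name.removeprefix("*.").lower())
    return names

print("\n".join(sorted(crtsh_subdomains("tesla.com"))))
```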
So I'm going to go into the automation of these and what I use to make this quick. There are two tools for doing all this subdomain scraping and finding targets. One is Recon-ng, around which I've written a Python wrapper script called enumall. The other is Sublist3r. They're both really good, and they largely do the same thing. These are the sources they pull from; it's kind of hard to see. In common, they both hit Bing, the crt.sh project (the SSL certificate directory), the ThreatCrowd API, and the Netcraft API. Individually, they also have some different sources: Sublist3r handles Baidu, Ask, DNSdumpster, VirusTotal, and PassiveTotal, while Recon-ng does the ssltools API, the HackerTarget API, and Shodan, which is a pretty popular one, plus a whole bunch of optional modules you can bake into Recon-ng. One way or another, these are all websites aggregating domain information, and these tools scrape that out of them for your target. They will find Tesla subdomains for me. So they're disparate tools, but they're the two best tools, and if I want to scrape all of these sources, I don't really want to have to run both by hand.

This is a run of Sublist3r on the right. It's hard to see, but basically I've pointed it at tesla.com. It runs, it searches Baidu, Yahoo, Google, Bing, Netcraft, DNSdumpster, VirusTotal, ThreatCrowd, SSL certificates, and PassiveDNS, and then it just gives me an output of all the domains. You can kind of see them: tesla.com, auth.tesla.com, autodiscover, dev.tesla.com, forums.tesla.com, et cetera. So I've significantly expanded my scope from just tesla.com.

The other tool was enumall, which I wrote. And I don't really feel like running them all independently; I want some form of automation around this. So this is a project called BruteSubs, which can basically take any recon tool written in any language, as long as you make a Dockerfile for it, then spin it up, run it, and collect the output. BruteSubs specifically takes my tool enumall, Sublist3r, and a couple of the other tools we're about to talk about, and runs them all. There's some configuration required: you have to set up the Docker image with some modules that aren't included by default and make a custom environment file. But once you've done that, you just bring the containers up, and it runs all of this domain scraping for you and hands you one output text file from three or four different tools. This is a run on the left; you can see it does a little bit of everything. One of the bundled tools is actually a brute forcer, but we're not going to use it, because we're going to use something else for brute forcing.
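If you'd rather drive Sublist3r from your own scripts than shell out to the CLI, it exposes a library interface. This is a minimal sketch based on the signature in the project's README, so double-check it against the version you install:

```python
# Minimal sketch: call Sublist3r as a library instead of a CLI tool.
# Assumes `pip install sublist3r`; argument names follow the project's
# documented interface and may differ between versions.
import sublist3r

subdomains = sublist3r.main(
    "tesla.com",              # target domain
    40,                       # threads
    None,                     # savefile (None = don't write to disk)
    ports=None,               # skip port probing
    silent=True,              # suppress the console banner
    verbose=False,
    enable_bruteforce=False,  # scrape only; brute forcing comes later
    engines=None,             # None = use every supported engine
)
print(f"{len(subdomains)} subdomains found")
```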
All right, some other subdomain-finding tools that scrape data. The one I find really cool here is Cloudflare. Basically, when you log into the Cloudflare site and go to add Cloudflare to a domain, one you potentially own, you type in, say, disney.com, and it will tell you if that's already in use, basically whether Cloudflare is already running on that domain. This isn't my screenshot, by the way; it's the tool author's. So by iteratively putting names into that search field on Cloudflare once you're logged in, you can verify whether or not they have a subdomain in existence with Cloudflare. Cloudflare runs on about 15% of the internet right now, so you can get a massive amount of OSINT information out of this API or web portal of theirs. Here it's running against Disney, and you can see it's returned some successes querying Cloudflare's DNS archives. There's also Censys.io, another project for aggregating web data. These are bespoke tools that don't really fit into anything else, but I really like the Cloudflare one.

Okay, so we've scraped a whole bunch of stuff off internet search engines and a whole bunch of open source sites. Now we have to move on to basically guessing at our target. I might have some search engine results, but now I want to do subdomain brute forcing. The classic example is that you have tesla.com and you try to resolve admin.tesla.com; if it resolves and you go somewhere, the site exists, and if it doesn't resolve, it doesn't exist, and you iterate over this in a tool. Over the years of pen testing, multiple tools have come out for this. Fierce was probably the first and best one. Newer-school ones are subbrute, Blacksheepwall, and DNS parallel brute force. The problem with these is that they take a long time with the big list you're trying to brute force, and different projects have come out with different lists, some of which are really long. To run through a brute force like that normally, at least in my experience, took days to weeks.

This is some research I did: I benchmarked all of these tools. You can see the two best ones here are Gobuster and massdns. Gobuster is written in Go, obviously, and does subdomain brute forcing; with this one-million-line subdomain list, it completed the whole thing in 21 minutes. massdns finished it in 1 minute 24 seconds. The reason massdns is so fast is that it's written in C, and instead of using just the DNS infrastructure you're connected to, it has a list of public DNS servers that it cycles through in parallel to resolve the names. It gives you more false positives, but it runs very, very quickly. So these are the two tools I integrate into my methodology for subdomain brute forcing.

[Audience question.] Well, subbrute errored out, but DNS parallel brute force and Blacksheepwall returned 61 and 43 results respectively, and those were all included in the data sets from Gobuster and massdns.

Okay, so that file I said was a million lines, the one you use for subdomain brute forcing: it's made out of basically every list I could find that has ever been used for this type of work. The Fierce list is in there, the dnscan list, the DeepMagic top 500 subdomain prefixes. There was also some research earlier this year by a guy named bitquark, who scraped the web for the top million most popular subdomains. Remember, we're just brute forcing names to see if they resolve; we're not brute forcing logins or anything on the server. So this is everything that has ever existed for this, as far as I could find, all catted into one file for you at that gist. You can just grab it.
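To show what those brute forcers are doing under the hood, here's a minimal sketch of the core loop with the `dnspython` package and a thread pool; it's orders of magnitude slower than Gobuster or massdns and is only meant to illustrate the idea:

```python
# Minimal sketch of subdomain brute forcing: prepend each word in a list
# to the target domain and keep the names that resolve. Assumes the
# `dnspython` package (pip install dnspython).
from concurrent.futures import ThreadPoolExecutor
import dns.exception
import dns.resolver

def resolves(name):
    """Return the name if it has an A record, else None."""
    try:
        dns.resolver.resolve(name, "A")
        return name
    except dns.exception.DNSException:
        return None

def brute(domain, wordlist):
    with open(wordlist) as f:
        candidates = [f"{w.strip()}.{domain}" for w in f if w.strip()]
    with ThreadPoolExecutor(max_workers=50) as pool:
        for hit in pool.map(resolves, candidates):
            if hit:
                print(hit)

brute("tesla.com", "all.txt")  # e.g. the catted million-line list from the gist
```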
And the idea here is that because massdns is so fast, why not just use a huge list? It doesn't matter, right? It gives us better coverage, and massdns can complete the list in about a minute and a half. There's even some extra stuff in the subdomain list: the RAFT lists, which were built for URL brute forcing, using words to find URL paths, turn out to be sometimes useful for brute forcing subdomains as well, so those are incorporated into the list too. Daniel's RobotsDisallowed project I also ported into a DNS-style list and included.

Okay, so I've done a ton of brute forcing and found some stuff that resolves, and I've scraped the internet and found a whole bunch of stuff that might exist because it was in search engines or all these open source caches. Now what else do I do? A quick part of the methodology is trying to find permutations of the stuff I've already found. The idea is that I may have brute forced admin.tesla.com, but a lot of people use subdomain nomenclature like acs.admin.tesla.com. What AltDNS does is take the results of your other tools, generate permutations all over those names, and then resolve them to find additional subdomains on top of what you've already found, because organizations use these weird naming structures in a lot of places.

The other one is kind of interesting. I've only used it once with success, but I really like the idea behind it. It's called SDBF, the Smart DNS Brute Forcer. The approach is to use Markov chains and some statistics, an n-gram model, to generate likely permutations of domains. I actually don't know what all that means; I've just used the tool, and it's kind of cool. The white paper came out of an academic research project, and the author has published the code. The variance between this and AltDNS is actually very low; I found maybe 10% more with this than with AltDNS, but I've only run it a couple of times on a couple of projects. I like the idea, though, and it has fancy names.

[Audience question.] Yes, with 100% of what we've covered so far, yeah.
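Going back to the permutation idea for a second, here's a minimal sketch of AltDNS-style candidate generation (my own illustration, not AltDNS's actual code): splice common environment words into subdomains you've already confirmed, then feed the output back into your resolver or brute forcer:

```python
# Minimal sketch of AltDNS-style permutation generation: splice common
# words into already-confirmed subdomains, then resolve the candidates.
def permutations(known, words, domain):
    out = set()
    for sub in known:
        prefix = sub.removesuffix("." + domain)
        for w in words:
            out.add(f"{w}.{sub}")              # acs.admin.tesla.com
            out.add(f"{w}-{prefix}.{domain}")  # dev-admin.tesla.com
            out.add(f"{prefix}-{w}.{domain}")  # admin-dev.tesla.com
            out.add(f"{prefix}{w}.{domain}")   # admin2.tesla.com
    return out

candidates = permutations(
    ["admin.tesla.com"], ["acs", "dev", "stage", "qa", "2"], "tesla.com"
)
print("\n".join(sorted(candidates)))
```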
Okay, so I've done acquisitions, I've done their ASN, I've done subdomain scraping from all kinds of public resources, and I've done permutation scanning to find bespoke, weirdly named subdomains. Now I need to move on to some actual port scanning. I have a whole bunch of targets, and there's really no solution better than masscan. It's just what you do. Nmap will take forever with a large ASN, not to mention the amount of time it would take to add on all those domains I found, so you just can't use it. For a large target's ASN, say 65,000 live hosts or something like that, an Nmap run is effectively infinite and you fall asleep; with masscan you can do a scan like this in 11 minutes.

The problem with masscan is that it has no default port list; it's really meant for projects where you scan the whole internet for one port. So you have to specify on the command line all the ports you want. These are the ports I use as defaults for masscan. You can do a scan like this and just plug it in and go, or you can use a config file, put all these ports in your masscan config, and it will go out and port scan all of this stuff. It's written in C and distributes its probes really fast, and that's why it can do what Nmap can't. It doesn't have any of the functions Nmap has for service versioning or the Nmap scripting engine. [Audience question about output formats.] I think it's just a list on the command line... actually, no, it supports -oG, which is Nmap's greppable syntax, as well as XML, comma-separated values, and list output. Greppable, there we go. Cool. This tool will melt your boxes, so make sure to run it on DigitalOcean and not your home network; it will DoS your router.

So I've port scanned a lot of stuff, I've scraped a lot of stuff, and I have a lot of sources I'm building from. But in the first part of the methodology we were scraping public sources, and those things might not be on the internet anymore. They might have been taken down, or they might still be registered but redirecting to the home page of the site I'm after. When I did this run on Tesla, a lot of the sites were just teslamotors.com, their main domain: they had registered something either so some nerd wouldn't hijack a domain that sounds like them, or because they had maybe planned to use a site eventually but never did. So there are a lot of parked registrations and redirects and things like that, and you need to do some visual identification. But I'm not going to go open something like 400 tabs of sites in my browser; that's inefficient.

So you use a tool for this, and I like EyeWitness the best. It takes your list of output, and since I've been scraping from search engines, I sometimes don't get the protocol I'm supposed to visit for a site: it might be HTTP, it might be HTTPS, depending on the source I scraped it from. EyeWitness will try to visit each subdomain or domain over both HTTP and HTTPS, take a screenshot, and dump it into a folder, and it will also pull header information and content types and sort those into folders now too. So I run this on the output of everything I've already done and start visually inspecting. You can see some thumbnails there of roughly what the screenshots look like. Eventually, in a large engagement like this against an open-scope target, you'll see something like an employee login, or a partner login, or maybe an old marketing page, or a page that looks like it's coded in a language you could probably kick and it would fall over. That's kind of where you'd want to start.

This is a run of EyeWitness on the left. The added benefit of EyeWitness is that it has a library to take screenshots of a couple of other protocols too, not just HTTP and HTTPS: it will also take screenshots of RDP and VNC. So if these sites have those ports open, it will take a screenshot of the RDP login. I've actually scored pretty good money on sites that left RDP exposed to the internet with admins still logged in, so I had their usernames and all I needed to do was brute force their passwords to get into the RDP instance. VNC too.
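As a rough sketch of what EyeWitness does for the HTTP side (my illustration with `selenium` and headless Chrome, not EyeWitness's own code, and it assumes a working Chrome/chromedriver install):

```python
# Minimal sketch of the EyeWitness idea: try each host over both http://
# and https:// and save a screenshot for quick visual triage.
import os
from selenium import webdriver

os.makedirs("shots", exist_ok=True)

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--window-size=1280,1024")
driver = webdriver.Chrome(options=options)
driver.set_page_load_timeout(15)

for line in open("resolved_hosts.txt"):
    host = line.strip()
    for scheme in ("http", "https"):
        try:
            driver.get(f"{scheme}://{host}")
            driver.save_screenshot(f"shots/{scheme}_{host}.png")
        except Exception:
            pass  # dead host, bad cert, or timeout; move on

driver.quit()
```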
One second. Any questions? [Audience question about non-web services.] Not in my methodology; I'm usually going after web targets. But the masscan output will give you all those ports, so if I want to, I can go back later and look at SSH, FTP, whatever protocol I want: public-facing databases, which nobody should have, I want to brute force those, and other interesting ports.

Cool. Some auxiliary stuff for enumeration here. If people have DNSSEC enabled, you can use a couple of the DNSSEC tools in the ldns utils package to walk the relationships of DNSSEC-registered domains. This has to do with a presentation at a conference I held at Bugcrowd called LevelUp, where we invited a bunch of our bug hunters to come talk about their techniques. One of them did a whole talk on this idea of zone walking with ldns: NSEC walking and even NSEC3 walking, which are all technologies related to DNSSEC. The presentation there taught me a lot about these methods.

There was another one about using GitHub to find secret keys, internal credentials, API endpoints, and domain patterns inside GitHub. Really, it's just searching for your target inside GitHub: you go to the search box and type tesla.com, and any partner that's ever integrated with them and has open source code, you get their domains, and they might have left API keys in there.

There's also the simple fact that people often don't just visit the site and see where it takes you. This isn't passive, this is active: one method is to use Burp Suite and scope filters to find additional domain coverage. I'm going to show that in a second, but basically you load your main target through an interception proxy, Burp Suite for web testers, you visit it, the JavaScript starts firing, you walk the site a little, your site tree on the left starts building up, and then you can nuke your scope down to a keyword like tesla.com and start looking at the additional sites they were linking to.

You can also do a ton of Google dorking for things like a company's Google ads keys. If a company registers an ads key with Google, they're going to use it globally across a lot of domains, so if I want to find all of their sites, I might Google for their ads key. Companies also use the same privacy policy and terms of service across all of their sites, so if I can find that, I can Google for the string and find more domains using it. And then there's looking for people's S3 buckets on AWS; that's in a presentation by Ben Sadeghipour, I think that's how you say his last name. He was an intern of mine at Bugcrowd, he works at HackerOne now, and he did a presentation on that, which is linked on the right as well. Those methods take a while and could each have their own talk, so I won't go into all of them right here.
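To make the GitHub technique from a minute ago concrete, here's a minimal sketch against GitHub's code search API. Code search requires authentication, so the `ghp_...` token below is a placeholder you'd replace with your own:

```python
# Minimal sketch: search GitHub code for mentions of the target domain.
# Assumes `requests` and a GitHub personal access token (code search
# cannot be queried anonymously).
import requests

TOKEN = "ghp_..."  # placeholder personal access token

resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": '"tesla.com"', "per_page": 100},
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["repository"]["full_name"], item["path"])
```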
So this is just a simple idea of what you do with Burp, and I'm going to try a live demo. I'll fire up Burp Suite. Do any of you in here use Burp? It just sits between your traffic and the server, it shows you everything that goes by, and it has a whole bunch of helper tools to manipulate that web traffic. It has a scanner in it, but today we're not going to focus on any of that. So I've opened up Burp Suite, and I have a Chrome tab here that's directed to pass through it. If I refresh tesla.com, you'll see my proxy tab light up. I'll let this through, and now all the traffic is flowing through Burp Suite to the internet.

You can see on the left-hand side how many domains have traveled through. Now I want to make sure I completely spider Tesla so I can find every related domain. So here's tesla.com down here; I can right-click and spider this host. It will say, "Would you like to modify the scope to include these items?" You want to answer yes here, so I'll say yes. All right, so this is going to spider tesla.com with the web spider. You can see it's transferring bytes, it's making requests, it's got some forms queued. I'm going to stop it here because this could take a long time. Now I've got a whole bunch of stuff in this site tree on the side, and it will expand out pretty large if I let that spider run to its finish. Not all of these are Tesla sites, right? These are all links that were on their sites. So what you want to do for your target is go to your scope tab here, add a new rule with a keyword like "tesla" under host, then click here on the ribbon and say "show only in-scope items". Now these are all Tesla sites that were linked from their main domain. What I'd want to do next is select all of these, spider all of them recursively, find out what they're linking to, and use the same filter to try to find more domains. Doing this recursively, I could probably find a ton of the stuff they link to on their sites. Any questions there? Cool. All right, that was the only one I didn't have an animated GIF for.

All right, so I've identified a ton of sites to hack on a large-scope program or pen test. Now I want to do some actual identification of what software they're running, so I can do the hacking. There are some tools that already exist to tell you what technology stacks these sites are using. Wappalyzer and BuiltWith are two browser extensions that, automatically, just based on the headers that come back from the site and strings in the HTML source code of the page, know it's built with WordPress or whatever framework it's built with. They can even go as deep as telling you what databases are running behind the project. They're just little buttons that sit inside your browser; you click them and get full-stack information. The reason I want the full-stack information is that it helps me know what techniques I'm going to use to try to hack that server.

There's also Retire.js, which is a scanner that finds outdated, vulnerable JavaScript libraries, which is super cool; I use it a lot. And then there's Vulners. It's not a new site, but there's a new Burp extension for it that I like. Vulners.com is basically a repository of CVEs, except instead of just the CVE advisory, they give you exploitation data on how to exploit old versions of software, like the old hacker sites used to do. They give you the write-ups from the original person who exploited it, not just "there is a buffer overflow in this area of the application, this is the CVE number." What the Burp Vulners scanner does is, as you browse a site with it loaded into Burp, it tells you what technologies the site is using and what CVEs it might be vulnerable to based on the version numbers it's returning, and it has complete links inside Burp, in the bottom right-hand corner there, that you can click to find exploitation info.

So it's a pretty sweet Burp Suite extension that I like a lot. [Audience questions.] Yes, it's a Burp Suite plugin. It came out, I think, two weeks ago. It's not in the BApp Store yet, at least as far as I'm aware, so it's on GitHub under Vulners: vulners.com, Burp Vulners scanner, that's where it is. This is just loading the extension and running it on something. Eventually, in the bottom right-hand corner, you'll see the outputs: it's going to a site, selecting a site, and yes, Vulners detected, and there are all the CVEs that might be associated with that site because of its versions, in the bottom right-hand corner.
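As an aside on how the matching in tools like Wappalyzer and BuiltWith works, here's a toy sketch; the signature table is a tiny subset I made up for illustration, nothing like the real databases those extensions ship with:

```python
# Toy sketch of technology fingerprinting: match response headers and
# body strings against known signatures. Illustrative subset only.
import requests

SIGNATURES = {
    "WordPress": lambda r: "wp-content" in r.text,
    "PHP":       lambda r: "PHP" in r.headers.get("X-Powered-By", ""),
    "ASP.NET":   lambda r: "X-AspNet-Version" in r.headers,
    "nginx":     lambda r: r.headers.get("Server", "").startswith("nginx"),
}

def fingerprint(url):
    r = requests.get(url, timeout=10)
    return [tech for tech, match in SIGNATURES.items() if match(r)]

print(fingerprint("https://www.tesla.com"))
```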
Cool. All right, so I now have a bunch of targets, I've started to identify their platforms, and I have auxiliary reports to test. I really have a lot to test at this point on a large-scope pen test or bug bounty. Now, in any web test, if it's web technologies, you have to discover all the content of the website, not just by spidering the stuff that exists but by trying to find the stuff that isn't linked. This is a thing called content discovery, or directory brute forcing. You'll learn about it if you take any web hacking class: it's basically brute forcing paths on your URL. Let's say I'm hacking tesla.com and there's a hidden page that's basically the admin's page at /admin, but that's never linked anywhere on the site; only employees know the path to go there. Well, the way you find this stuff is you brute force words after the main tesla.com path.

Just like the old tools in DNS, these used to take a ton of time: tools like Patator or any other directory brute forcing or content discovery tool, even the content discovery tool in Burp, are not super fast. Gobuster is kind of the new-school tool to do this. It's written in Go, it's multi-threaded, and it really does a good job, so you can burn through a large brute forcing list with it. [Audience question.] Yes, it has some of the same functionality as the older tools. It took 40 seconds to run a 500,000-line directory brute force list, which is really fast; if you don't know, that's super fast. [Audience question.] For brute forcing passwords, I think Medusa was the one I used last. Medusa, yeah. Anyway.
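Here's a minimal sketch of the content discovery loop itself, just to show what Gobuster is doing far faster in Go; the word list filename is an example of the kind of list you'd pull from SecLists:

```python
# Minimal sketch of directory brute forcing: request each candidate path
# and report anything that doesn't come back 404.
from concurrent.futures import ThreadPoolExecutor
import requests

BASE = "https://www.tesla.com"

def probe(path):
    url = f"{BASE}/{path}"
    try:
        r = requests.get(url, timeout=5, allow_redirects=False)
        if r.status_code != 404:
            print(r.status_code, url)
    except requests.RequestException:
        pass

with open("raft-medium-directories.txt") as f:  # e.g. a SecLists word list
    paths = [line.strip() for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=30) as pool:
    pool.map(probe, paths)
```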
Okay, so you're brute forcing URLs, right? You have a fast tool, Gobuster, but you also need a great list to know what common paths there are. There are three lists that are really kind of the industry standard now. SecLists is a project that Daniel and I run together, and it has lists of paths that usually exist and are sensitive. Dan did a separate project called RobotsDisallowed: he went out to the whole internet (right, Dan?) and basically found every robots.txt file that existed, which is the stuff admins don't want you to spider, and put it into a list so you can then go spider it. It's like the whole internet's robots status, so it's super cool; you run that through Gobuster, it's really fast, and you'll find good stuff. Then there are the Digger word lists, two projects put out by, I forget which consultancy it was, it used to be Stach & Liu, which is now Bishop Fox, I think. They're cool guys. They had these Digger tools, and what they did was go to the source code repositories, GitHub and another one, maybe Bitbucket, I'm not 100% sure. Anyway, they went there, spidered all the code from those projects, parsed out any paths they saw for directories, and made lists based on that and on recurring instances. So use any of those three lists, but use them with Gobuster. You can also do this inside Burp, it's just not as fast; but if you want to keep it all in Burp, you can do it in Burp.

All right, so now I have paths, I have sites, I've got a ton of stuff to work with. Is there anything else I need to do? There can be. I don't do this a lot, because it's an extra step, but it is worth mentioning, and I think it's a cool idea. The idea here is that I might have found a whole bunch of scripts, and maybe I found an admin script or resource on a page, but this resource doesn't tell me how to use it. I'm not an admin; I don't know what parameters to send, or what data to pass into this script, to make it do anything. So you can actually brute force that as well. This is a tool called Parameth. Here on the bottom it found a resource, a simple test PHP script, but it has no idea what parameters to pass to it to execute anything. So what it does is brute force parameter names for that script.

There's another project that Burp actually put out, called Backslash Powered Scanner, and they did similar research to Dan's, except instead of crawling all of the robots.txt files on the internet and putting them into a list, they (I don't know where they got the data) pulled out the top 2,500 parameter names used on the internet. So you can load that into Parameth and try to find parameters that will execute. You know when you're successful because this tool does a response-diffing comparison between sending the script nothing and sending it the brute forced parameter, so it tells you "hey, this differed, you should go check this out, because I think something actually executed here."

[Audience questions.] No, that's a different thing. This is a command-line tool. You could do it in Burp, you would just have to do it in Intruder. Command-line tools are usually faster, but if you do it in Burp, you keep it inside your Burp workflow, which is nice. Okay, so parameter brute forcing is a thing.
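And here's a minimal sketch of the Parameth-style approach as I understand it: take a baseline response for the bare script, replay the request with each candidate parameter, and flag anything that differs. The URL and the diff threshold are made up for illustration:

```python
# Minimal sketch of parameter brute forcing: flag candidate parameters
# whose responses differ from a baseline, since a change usually means
# the parameter did something server-side.
import requests

TARGET = "https://example.tesla.com/test.php"  # hypothetical script

baseline = requests.get(TARGET, timeout=10)

with open("burp-parameter-names.txt") as f:  # top-parameter-names list
    params = [line.strip() for line in f if line.strip()]

for name in params:
    r = requests.get(TARGET, params={name: "1"}, timeout=10)
    if (r.status_code != baseline.status_code
            or abs(len(r.text) - len(baseline.text)) > 50):
        print("response differed for parameter:", name)
```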
All right, so there's a lot of stuff we've done in here. How do you keep track of it? How do you automate it? Really, there aren't great solutions right now. The problem is that I see a lot of people making frameworks to do all of this for you, and that's really great: jcran at my work has one called Intrigue, and it's the slickest-looking OSINT tool; there's one called Datasploit, which is badass, and which these guys here actually make, which is cool. But my problem is that the tools, and the sites they're scraping, change a lot. Sometimes the sites go offline, sometimes they start limiting your API access, and the technique may be different the next time it's written. A lot of these frameworks write their own versions of the tools, and they're never as fast or as complete as I need them to be, so I always end up going back to using the individual tools manually to get the data I want.

So there are two projects, one called Hodor and one called Kubebot, by Anshuman Bhartiya, and the idea with Hodor is that you set everything up as a Docker container with a standard Docker config file. If you have a new tool and it's the new hotness, you load it into your Docker config file, and then you run one command in Hodor and it distributes your command across all of the machines. Let's say I run one command and give it tesla.com: it will stand up new Docker instances for everything, run all my tools simultaneously, and spit the output back to my host machine. I think that's where the future of all this is, because if I find a new tool I like, it doesn't matter what language it's written in; I can just make a Docker container for it, start it, and bring back the output. Then all I have to do with that output is parse it a little bit on the command line and it's usable. So I think Hodor and Kubebot are what I'm going to use in the future.

There's also, in the bug bounty world, this idea that in order to hack these things, I have to be the first to know about them. When someone stands up a new subdomain, I need to be the first to know about it so I can hack it and get a good bug. Assetnote is a framework that will do some of this for you; you can write custom modules for it, and it will text you when it finds a new domain. I have a couple of bug bounty hunters who use this with great success. They get an alert on their phone that says Tesla stood up a new domain: you need to go home right now, stop whatever you're doing, and go hack it before the other 500,000 bounty hunters go after it. And it really does make a difference when you're trying to do bug bounty full time for a living.

That's it. Any questions? [Question about the slides.] I haven't published this version of the slides. I don't know, are you guys publishing the slides somehow? I'll give them to the organizers, and they'll probably be on the Recon Village site. You have a question? [Question about open directories and buckets.] Yeah, so there's also the idea of looking for that kind of stuff. I haven't found any tooling that I've settled on yet; that's why I didn't include any of it. There are probably four or five tools that will scour the internet for misconfigured .git directories and also misconfigured AWS S3 buckets, and I haven't really found one I liked yet for the presentation, but that is a method that I do use. [Question about scanning infrastructure.] Yes, I don't do anything from home anymore. I did a bounty a couple of weeks ago and went up against Akamai's WAF, and if you get blocked by Akamai's WAF, it blocks you from PayPal, eBay, all your banks. I tried to register for a hotel and it wouldn't let me onto the site. My wife was really angry with me. So yeah, I do everything from DigitalOcean now, pretty much. They haven't bothered me yet. [Follow-up.] Nothing yet, yeah.

So, we have extra Recon Village badges, which are super sweet. How am I going to do this? Somebody name one of the tools I use in my methodology. Who said it first? Over here, it was you. What was one of those subdomain scraping tools I talked about? I think he said it first: Sublist3r, yes. Cool. All right, sorry, I only had two; you need the battery. I also have a couple of printed copies of PoC||GTFO, so if anybody wants one of those, they can just come up and grab one; I'll put them right here. Cool. Really, thanks for coming, I really appreciate it.