Hey everyone, my name is Jason and thanks for joining us this morning. Super big thanks to the Red Team Village for having me and all the other speakers on, to the staff for facilitating everything that's going on with the conference, and to DEF CON for hosting the village; DEF CON has a big place in my heart. So today I'm going to talk a little bit about my methodology for recon. I've been doing this talk, The Bug Hunter's Methodology, for five years now, and I basically update it every year and talk about new tools and techniques in different spaces of the bug bounty scene and red teaming. The Bug Hunter's Methodology, TBHM, is a presentation that, like I said, has been running for a long time, and it's too big for any one conference slot, so I split it into two sections. One is recon, which we're going over today, and the other is application analysis, which is on-site hacking and things like that. I'm still working on this year's version of application analysis, haven't finished it yet, I'm a giant slacker, sorry.

A little bit about me. I'm a husband, father, hacker, gamer, and I stream sometimes. These are my socials, so if you want to reach out and ask a question after the talk or something like that, just hit me up. I was a bug hunter, then I worked at Bugcrowd for many years, and now I'm the head of security at Ubisoft, where I lead the security team. Awesome team, great people. And I'm also a gamer at heart, so I play some video games. These are my kids, Arcadia and Avalon, in the top picture. I took them to DEF CON for the first time last year and we did DEF CON Kids, r00tz. It was amazing, they had a great time. And that's my son Arlen, him and I being pirates in the bottom right hand corner. So that's a little bit about me. I've been doing pentesting for, well, a long time, I'm old, so I have a lot of experience related to this talk.

So the first thing I want to talk about: today we're going over recon methodology. We do recon primarily in red teaming engagements and in bug bounties, and you can do it in some wide scope pentests as well. When you're doing an assessment, whether it's one of those three, you have to have a way that jives with you to record your work; otherwise you're just throwing darts and forgetting what you're doing, which is counterproductive to keeping organized. So the first thing I talk about here is how I keep notes. I use a tool called XMind, which is a mind mapping tool: I create nodes, I build my methodologies as checklists inside those nodes, and I track all my domains and which ones I've worked on by color codes. I've seen other people do project tracking for assessments in just notepad or vim, and there are some specialized pentest tools too; if you're really fancy and you have a subscription, you can probably do it in something like Dradis. So there are a lot of tools, but it's just really important to track your work when you're doing recon. Recon is the art of finding as many assets as possible related to a target, and it can get pretty data heavy and dense as you do it. So having a way that works for you to track your data is the first thing that's going to guide you to success.
So this is an example of how I track my data when I'm on a target. Tesla has an open scope, or pretty open scope, bounty on Bugcrowd, and this is how I would start off my project with Tesla. In the middle I have a node in this mind mapping program I use, XMind. On the left hand side I have their autonomous system numbers or ASNs, any acquisitions they've made, some other notes linked in discovery, and a link to their reverse WHOIS information. On the right hand side I'm starting to build out their root domains, or their seeds. Roots or seeds are terms for things like teslamotors.com or tesla.com or solarcity.com. So I've started to build out their seeds on the right hand side: you can see Tesla Motors, Tesla, SolarCity, etc. And as I do recon on Tesla, I'm going to start collecting a lot of data. So that turns into this, where on the left hand side you can see that for the tesla.com seed I've enumerated all their subdomains and all of the live links. These are all the live links for tesla.com's subdomains, things like pages.tesla.com, shop.tesla.com, etc. And then you can drill down into any one of these. If I drill into www.tesla.com, I have my methodology notes for that single site, and inside of that node I keep all of the questions I ask myself when I'm looking to hack a website. Does the site have multiple user roles? How does it reference users? How does it handle special characters? What are the dynamic parameters? Does it have an API component? What kind of errors am I seeing? Does it have file uploads? Have I done JavaScript analysis for paths? Have I done content discovery? All these questions that you normally keep in your methodology, I apply to each one of these subnodes. And then I color code my work based on where I am, and this is one of the most important things, I think: everything that is not filled in hasn't been worked on yet, so most of this hasn't been worked on yet; everything in orange means I'm currently working on it; and everything in green, and there's nothing green on here, I've already done. Being able to do this, and add check marks and other markers to these mind maps in XMind, helps me track where I am, so I can easily put down a project and pick it back up if I want.

Okay, so today's mission: we're going to talk about wide recon. This is the art of finding as many assets related to a target as possible. So you're a red teamer or you're a bug bounty hunter and you have a scope of X company. One of the things you need to do is identify all the websites that they own, because the more websites or the more infrastructure you identify, the better your chances of getting in or finding a bounty. This is a core component of both of those skill sets. And I break down what we call recon into a couple of domains: first, finding the in-scope domains via the program or your project brief; finding acquisitions for the company or the target; doing ASN enumeration; doing reverse WHOIS; doing a whole bunch of subdomain enumeration; and then doing port analysis. And then we're going to go into some related topics, like some vulnerability scanning and some automation information related to recon, because you'll probably do those in this phase of your workflow.
Okay, so you've decided you're going to do an assessment, or you've been handed a red team assessment, or you've been handed a bug bounty that's wide scope. That's where we are right now. So the first thing we need to do is parse the program page, and I'm going to use a couple of bounties here to illustrate where you would start. Here is the brief page for the aforementioned Tesla program on Bugcrowd. In their brief, they have *.tesla.com, *.tesla.cn, *.teslamotors.com, and *.tesla.services. And then they also have this catch-all sentence here that says any host verified to be owned by Tesla Motors that you can find is also included in scope. So this is what we consider a wide scope bug bounty; there's lots of stuff that you can do with this. Your first four seed domains are already listed here: tesla.services, teslamotors.com, tesla.cn, and tesla.com. So we're going to focus on these four and add them to our list, and then anything else we can find that is Tesla's is also fair game. So that's great. If you look at an example from HackerOne, they also have a pretty wide scope bounty, and probably the most prolific bounty available on the internet, which is Verizon Media. This is the granddaddy, the biggest scope program I know of in the bug bounty scene. Verizon Media has gobbled up so many brands, and they have so much infrastructure related to those brands and sites and subdomains; it's giant. A lot of hackers make their whole career hacking on Verizon Media, which is awesome, and they're a great team and they support the bug bounty community. They also have a catch-all kind of phrase in their scope that says, if you found a vulnerability that affects an asset belonging to them but it's not in scope, report it to the program. So it's a really advanced team to work with. You couldn't even fit all of the domains for this program on one page, and I didn't even try.

So you will parse your seed domains and your in-scope domains from the program page to start off, and this will give you a starting place to work from. The next thing that I do when I'm scoping out a company is find all their acquisitions, and there's a couple of reasons I want to do this. I basically want to see whether any of the infrastructure for those acquired companies has not been taken offline, or whether any of the subdomains or websites or integrations that were too hard to port over to new infrastructure are still online. This gives me more and more infrastructure insights to hack. To find the acquisitions, I use a tool called Crunchbase. Crunchbase is a business intelligence portal: you put any business name into the search box on Crunchbase, and you'll be able to look at that company and find out who their employees are, what their investment rounds are, what acquisitions they've had, what their socials are, etc. Here I've used the example of Twitch. We're streaming on Twitch today, which is awesome. Twitch as an organization has an entry here, and if I click on that, I get a lot of information about Twitch. I can see that under their acquisitions tab they have acquired four companies in the last eight years: Revlo, ClipMine, Curse, and GoodGame. And then also, on the left-hand side, they were acquired themselves by Amazon. So this gives me a good idea of what they used to be, and where I can also look for some infrastructure.
If it's a recent acquisition like Revlo, for instance, I can click on Revlo in this page and it'll take me to Revlo's page, which will give me their main domain, revlo.com. So I might put that in scope of a wide scope assessment or a red team assessment. Acquisitions are important to keep track of. Now, the one thing you want to make sure of here is that on some of those brief pages we looked at, some of these are explicitly out of scope. The brief for Tesla, for example, says SolarCity is not in scope of their wide scope bounty. So even though you'd find SolarCity as an acquisition doing this kind of Crunchbase analysis, and I've done it many times, it's explicitly out of scope. So remember to refer to your project page and understand what is okay and what is not okay to hack or do recon on.

Okay, so after I find the acquisitions, which has given me new seed domains like revlo.com on top of the top-level seed domains like teslamotors.com and tesla.com, what I want to do is find this company's autonomous system numbers. Every company that gets large enough ends up having a collection of networks with an AS number applied to them, and an autonomous system number is just a collection of the known IP ranges for your networks, for your infrastructure. You can find someone's autonomous system number in many ways; there are many search engines to do this kind of lookup. I use one called Hurricane Electric, at bgp.he.net, when I'm doing manual searching, because it has a free-form text box at the top and I can just search the keyword Twitch. And here you can see I found Twitch's two autonomous system numbers, AS46489 and AS397153, and their associated IPv4 ranges and IPv6 ranges. So this is a good representation of the IP space that they own, and I can pretty much guarantee that it belongs to them. I don't have to worry about scope much here, because I know all of this is owned by them, if they have that catch-all phrase in their bounty brief or you've got the go-ahead in your red team engagement for open scope and you know this is your target. One thing this doesn't represent is their cloud assets. This is all owned IP space; it doesn't represent things like AWS and Azure and GCP ranges, so you won't find the stuff they host in those environments inside of these ranges. You just want to be cognizant of that.

So here I've searched Twitch and got their ASNs. With every tool or method in my methodology, I like to give you a manual way to do it, because context is important and getting used to the idea is important, and an automated way to do it, in case you're going to script up your own recon framework, which is all the fad these days; I have one of my own and everything. So there are two tools you can use to parse ASNs and IP ranges for companies on the command line, if you're going to script this stuff up yourself. One is called Metabigor, and I'm still not sure I'm saying the name of that tool right, by j3ssie. It utilizes the site that we just saw, bgp.he.net, and another site called ASN Lookup, and it will grab the information from those websites, scrape it off, and give you your IP ranges for an organization.
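To give a flavor of the command-line version, here's a minimal sketch of pulling an organization's ranges with Metabigor. The exact flags are my reading of its README at the time, so verify with -h before relying on them:

    # feed an org keyword on stdin; metabigor resolves it to ASNs and IP ranges
    echo "Tesla" | metabigor net --org -o tesla-ranges.txt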
The other one is called Asnlookup, by Yassine Aboukir. Same idea; it uses a different data source, the MaxMind database, and it will pull the IPv4 ranges for an ASN. And you can give both of these tools a search term instead of an AS number, so they will pull the ranges based on the keyword we're looking for, in this case Tesla. Now, one thing to know about these tools is that they are working off your keyword here, which we put on the command line, Tesla, and there are multiple companies with Tesla in the name; there's something like a Tesla research lab in Sweden. So when you get these ranges back and you start seeing these sites, be sure you can identify, oh no, I might have gotten one that's actually not Tesla Motors. Both of these tools, I verified, pull up the right data for Tesla; just be cognizant, since you're using a keyword to search, that you don't get other IP ranges that are not your target's. You'll have to verify somewhat manually there.

Okay, so we've gotten some IP space from the ASNs and we've gotten some seed domains, and we want to continue gathering as many seed domains as possible. So we have the AS number, and here we're going to use Twitch as an example again; their autonomous system number is 46489. And here we get our first introduction to Amass. Amass is a framework for domain intelligence, I would call it, I guess, and it is written by Jeff Foley. There is a workshop you should absolutely sign up for: he's going to do a full workshop on Amass and how to use it. Jeff is associated with the Red Team Village; it's super cool. Here we're going to use Amass and feed it our autonomous system number, 46489. What it's going to do is go out to all of the IP ranges represented by that autonomous system number, scan the certificates for all HTTPS sites, and then parse the certificates and tell us: hey, here are the seed domains I found that are owned by them. And here you can see we've found some seed domains we didn't know about before. justin.tv, which is the company Twitch used to be; ttvnw.net, which I think is associated with Twitch's streaming protocol; twitch.tv, obviously, which is their main site; TwitchCon, which is their conference; and SocialCam, which I have no idea what is. All the links for these tools and methods are posted in the slides, so you can click through when the slides get released and go grab the tools and play with them on your own. So here we've already expanded our assessment a lot: we have a whole bunch of IP space, we have a whole bunch of top-level seed domains, and we have the seed domains already given to us by our brief page.
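The ASN-to-seed-domain step looks something like this with Amass; a minimal sketch, assuming a build where the intel subcommand takes -asn:

    # pull root/seed domains out of certificate and other data for everything in AS46489
    amass intel -asn 46489 -o twitch-seeds.txt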
Okay, so the next method is reverse WHOIS. What reverse WHOIS does is take the WHOIS entry for your domain, twitch.tv here, and use it to correlate what else has been registered by the same entity. You can do this with a lot of sites; I use whoxy.com because it's pretty cheap and you can get a free API key for so many uses. You basically supply it your domain and it will give you the WHOIS data, and then you can use that WHOIS data to correlate who else has registered stuff. So in the example here, we gave whoxy.com twitch.tv, and it said that over the course of the years there have been several company names related to the WHOIS data, and also registrant emails. You can see that as of March 26, 2015, the company was Justin.TV and the email on the domain was a domainmaster address at justin.tv, and then later in 2015 it changed to Twitch Interactive. If you click the button right next to those entries, you can see there are 20 other domains still registered to Justin.TV, which could be interesting to us, and there are 575 domains registered under Twitch Interactive. So we can start pulling back more data, more root domains, from these links when we click on them. Now, one thing to know here is that WHOIS data is registration data, and a lot of these companies will park domains to combat phishing and to hold stuff for marketing campaigns and things like that. So they might not be live sites; they might just be parked. This is a medium fidelity type of technique, but I have found some really good stuff using reverse WHOIS and the registration data of these sites. There is a tool to do this type of analysis. It's called DomLink, written by Vincent Yiu, @vysecurity on Twitter, and it uses that exact site, whoxy.com. You basically give it a domain, and it will find every other associated domain, by both registrant email and organization or company name, on the command line, and it's recursive. So, really cool tool here; you can use DomLink to automate some of that reverse WHOIS lookup.
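If you'd rather hit the reverse WHOIS source directly instead of going through DomLink, Whoxy exposes this as a simple API. A hedged sketch; the parameter names are from my reading of Whoxy's API docs, so double check them against your account:

    # all domains whose WHOIS records name the same company (API key is a placeholder)
    curl -s "https://api.whoxy.com/?key=YOUR_API_KEY&reverse=whois&company=Twitch+Interactive" | jq .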
So the next method, now that we've maybe gotten some more seeds and root domains from reverse WHOIS and we're building out our list of things we can hack, is to look at the ad and analytics relationships of our target site. We want to see what other sites are using the same ad and analytics codes as our main site. This will give us a good understanding of maybe what their most popular domains are, and what some of their less popular domains are that are still using the codes and that we didn't know about before. Almost every page embeds a Google Analytics code or a New Relic code or some kind of ad or analytics code in the main page. So if we use a site called builtwith.com, we can search twitch.tv, and it will give us, under the relationship profile tab, a list of all of the keys that are on twitch.tv. You can then click on those and see what other sites are using the same ad and analytics codes, sites we maybe didn't know about before. Here, it's hard to see on the bottom, but we have a couple of entries we didn't know about before: manvsgame.tv, twitc.tv. We knew about twitch.com from the previous method, but we're starting to build up more domains that we can see are related to our target. So ad and analytics relationship profiles are a pretty powerful technique; I have used it for success a couple of times. You can also do this directly from Firefox or Chrome: BuiltWith has an extension for the site, and you just install it, make a free account, visit your site in the browser, click the extension button, and it'll give you the relationship information right in your browser, and you can start clicking around there. So it's pretty easy to work into your recon while you're in a browser. You can also use a tool to do this on the command line. m4ll0k, after my first iteration of this talk, quickly scripted up an awesome tool to do this called getrelationship.py, and you just pass it your target. Here he's targeted uber.com, which also has a bounty program; you pass it to this Python script and it'll pull out, on the command line, all of the related domains. You'll have to manually verify these, but the ones I would look at first are the ones that actually have uber in the domain somewhere, like daft.uber or driveuber.co.nz or things like that. So you'll have to sift through this, but you can do it on the command line now too.

Then you can just use some general Google hacking or Google-fu to try to find sites associated with your target. You can use the copyright text, the terms of service, and the privacy policy, and you can just copy those from the bottom of any of the pages. Here you can see Twitch Interactive, Inc. would be something I'd Google for, and then you can see any other page that is hosting that text at the bottom of their page, which could be something you didn't know about before in your recon so far. You can do this manually, and you can use other search operators like inurl:twitch, or you can just look for these policies. One tip here: don't just target a certain year, like the 2019 I have here; also check 2018 and 2017. In fact, the farther back in age the stuff you find goes, the more likely it is to be vulnerable and the less likely it is to have had a lot of security assessment, so it's probably more fruitful for your red team engagement or your bug bounty.

All right, the next method we're going to use to find seed domains or related infrastructure is Shodan. I have to admit I'm not a Shodan ninja, but I know a lot of red teamers are, and I actually love Shodan; it's just a failing of mine, I need to get better at it. There's a ton of search operators we could use here on our domain twitch.tv. What Shodan is, is a site that hosts information from an infrastructure-based spider: the spider goes to every site on the internet and captures its HTTP responses, its technology stack, its certificate data, and cross-links it all, so you can click on any of these pieces of information and find out other things that seem related: what organizations it's related to, what common tech they're using, what web servers, what JavaScript frameworks, anything. So Shodan is a really powerful tool, and you can query our domain here and try to find related infrastructure. For instance, in one of the entries, when I didn't even use a great operator, I just searched my domain twitch.tv, I ended up seeing twitch.amazon.eu in the SSL certificate data of the search. And now I need to ask myself a question: is that in scope? It's not verbatim a twitch domain, but in a red team assessment, if that's related to Twitch infrastructure, maybe it's a target I can go after. So Shodan can give you a lot of good information.
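As a concrete example of both of those techniques, here's roughly what the queries look like. The Shodan ssl filter and CLI flags are my assumptions about current syntax, so treat this as a sketch:

    # Google: pages sharing the copyright footer, minus hosts we already know
    "© 2019 Twitch Interactive, Inc." -site:twitch.tv
    # Shodan CLI (assumes a configured API key): hosts whose certificate data mentions the target
    shodan search --fields ip_str,port,hostnames 'ssl:"twitch.tv"'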
Okay, so we found seed domains, stuff to start with. Now what we're going to do is try to find subdomains for those seed domains. These are things like www, which is a subdomain, but also admin.tesla.com or forums.tesla.com or forums.twitch.tv or whatever. And this idea of starting with seeds and then moving to subs really is going to multiply: the more seeds you find, the more subdomain enumeration you can do; the more subdomains you find, the more sites you find; and the more sites you find, the more successful you'll be inside of your red team engagement or bug bounty program. For subdomain enumeration, I use three-ish different methods to get subdomain information: one is link and JavaScript discovery, another is subdomain scraping techniques, and then subdomain brute force, plus some auxiliary stuff as well. We're going to run through those right now, and this is the bulk of what a lot of tools are doing right now, so you'll see a lot of favorites in these slides.

So the first technique we're going to use to find subdomain data is link discovery and JavaScript discovery. Link discovery, what is it? It's basically using a web spider to visit a site like twitch.tv, land on it, and spider all the HTML links. Pretty simple, right? Any of those links that come back with a subdomain, we just add to our list of in-scope targets. This is pretty self-explanatory, and it happens naturally when you're using tools like Burp Suite and you're spidering a site, but I'm going to walk you through how I do it. I'm going to use Burp 1.7 because I like the UI better, but it works as well with the crawler or the scanner in Burp 2.0, so use whatever you want, and then I'm going to give you some other methods to do it. The first thing we do is load up Burp Suite, which is on the right here, side by side with our site, and we just visit our site through the Burp Suite proxy. The proxy will capture all this data. We've only visited one page, but if you've ever visited twitch.tv, you've seen that it cross-links and requests a lot of stuff, because it's a streaming media site and there's a lot going on in the background when you go to twitch.tv. You can see all that data on the right-hand side. This is everything that's either been seen linked on this page or actually been requested: the stuff in black has actually been requested, the stuff in gray has only been seen. So you visit your page, and then you have to set up a rule, and you do this by going into the Target tab and the Scope tab, and clicking the box that says Use Advanced Scope Control. What I do is enter just a keyword here in advanced scope control, which I just say is twitch: I want to see in Burp anything that has the word twitch anywhere in the URL. And you could also add some more here, because we've seen in our previous analysis that ttv is also part of their domain naming nomenclature, so you can add a couple of these scope rules. Then you go back to your site map, click on the filter bar up at top, and say Show Only In-Scope Items. Now the site map will just show all the twitch links: sentinel.twitchservice.net, api.twitch.tv. So we have a lot of stuff with twitch in the name here, and we know most of this is probably related to Twitch. This gives us exponential areas to hack, and this is pretty good. But if we select all of these and then spider them with Burp Spider, it will go to all of them, find their links, and find subdomain references in their pages.
And we can do this recursively until there's nothing left to find. So here I'm going to select all of these, and then I'm going to spider them. And now you can see, since I had them selected, everything in orange on the right-hand side is stuff I knew about before I spidered, and everything in white is new stuff found after I spidered with Burp. You can see I found a combination of a lot of stuff here: I found subdomains for twitch.tv, and I've also found some new root and seed domains, like twitchapp.net and twitchservice.net and x-twitch.tv. So this is a hybrid technique; you can find new seed domains or roots as well as subdomains. Now I have all of this domain data, and I can select all of these and spider them again, spidering those pages for links, and you can keep doing this until you get kind of spider fatigue. Eventually you'll have a large site map built up of targets, which is awesome. Now, how do you get this data out of Burp Suite? There's not a great way to do it. Pro has a thing called engagement tools, and in engagement tools you can do this thing called analyze target, which can create an HTML report of all the targets in your site tree that are selected. So that's what I do: I select everything in the site tree, go engagement tools, analyze target, generate HTML report, and at the beginning of that HTML report there's a parseable list of the domains it's seen. I take that, put it in a text file for later analysis, and then dump it into my mind map as well.

Okay, so this whole link discovery thing counts on Burp Spider. Until the last couple of years I hadn't seen great options to do this on the command line; it would require a lot of custom coding, bash scripting or Python, to create my own spider and do that same process. Well, now there do exist some command line spiders built with bug hunters and red teamers in mind. There are two: one is called GoSpider, written by j3ssie, and it designates the types of things it's parsing when it's visiting a URL. The other is hakrawler, by hakluke. Both of these are awesome; they both have functionality that will parse out the types of things you're getting, like JavaScript files, subdomains, URL endpoints, and some of them will give you, I think, parameter names and stuff like that. So you could implement this whole link discovery process by scripting up GoSpider or hakrawler if you wanted. I still use Burp, but these are invaluable tools to have at your disposal, a crawler that can do some analysis, so keep them bookmarked for when you might need them.
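For reference, here's what driving those two spiders from the command line might look like; flags are per each tool's README at the time and may have changed, so check -h:

    # GoSpider: crawl the site two levels deep and write parsed output per host
    gospider -s https://www.twitch.tv -d 2 -o gospider-out
    # hakrawler: recent versions read target URLs from stdin
    echo https://www.twitch.tv | hakrawler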
Okay, so the next place we're going to get subdomain information is by analyzing some JavaScript. One of the tools I like here, just because it has a little added benefit that I haven't seen many places before, though maybe some other tools are starting to implement it now, is a tool called SubDomainizer, by Neeraj Sonaniya. You point it at a JavaScript file, and it will parse out all the cloud services and subdomains referenced in it. And it'll do this little extra thing where it uses the Shannon entropy formula to identify things that look like API keys hard coded in JavaScript, which is already kind of a vulnerability in itself. If you find a private API key or credential hard coded in JavaScript, and I know that sounds crazy to a lot of people, like you would just find a hard coded private API key, but this happens all the time, like all the time. This is what I like to call a forward thinking type of feature, using an algorithm like that to identify keys. Sometimes it's a little noisy, but a lot of times it finds you good stuff. So I like to point this tool at all the JavaScript files I've already found on some of these sites. If you're just looking for subdomain information, there's another tool called subscraper, by Cillian Collins, which has recursion built into it and can do this method as well, but it doesn't do the API key part. So point these at JavaScript files you find on your site, and you can get back a whole bunch of subdomains, plus the cloud services the target might use. All right, so that's link discovery and JavaScript analysis for subdomains.
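A quick sketch of that JavaScript analysis step with SubDomainizer; the -u and -o flags are my reading of its README, so verify locally:

    # pull subdomains, cloud services, and possible hardcoded secrets out of a page's JavaScript
    python3 SubDomainizer.py -u https://www.twitch.tv -o twitch-js-subs.txt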
Now we're going to get into the big meat of what a lot of people do, which is subdomain scraping. The idea of subdomain scraping is going out to all these websites on the internet that have search boxes, basically. There are all these projects on the internet: search engines, security websites, certificate projects, and things like that, and they all do different stuff. Censys or Robtex give you infrastructure information; the Wayback Machine houses information about domains and their responses in years past, and everyone's probably used the Wayback Machine; the certificate sources down here are projects that provide certificate transparency; search engines, obviously you know what search engines are; and then there's a whole bunch of security sites that do different things, like give you a rating on how malicious a URL or a site might be. The common thing they all have is either an API or a search box where you can put in a domain, and they will tell you anything they've seen related to that domain. If you put in a domain like tesla.com, the information that comes back is parsable, and you could find out that they know about some subdomains of tesla.com that we don't know about. So this is the process of subdomain scraping: we're going to go to all of these sources and ask them, hey, what do you know about tesla.com or twitch.tv? Do you know of any subdomains that maybe I don't know of? Now, there are many more sources than are on this page; these are only a subset, and new sources to parse are coming out as fast as new websites are coming out, so the tools have to keep adding new novel sources for subdomain data or URL data as they mature.

This is an example of using a search engine like Google to do it manually. Here we're searching with the site: operator, site:twitch.tv, and saying: I already know about www.twitch.tv, so -www.twitch.tv; I already know about watch.twitch.tv, so -watch.twitch.tv; and -dev.twitch.tv, because we already know about that. So now Google is only showing us things that are not those, and we can keep minusing out domains until Google doesn't know anything more about subdomains related to twitch.tv. That way we can get a full inventory of what Google knows about subdomains for twitch.tv. So that's an example of doing it manually with Google, but luckily you don't have to do this yourself; there are tools out there that will do this type of analysis, this scraping, for you.

The first one is Amass, by Jeff Foley and the Amass team; there's a whole team behind this. There are two tools I use here, Amass and Subfinder, but we're going to look at Amass first. Amass probably has the most sources of any subdomain scraping tool in existence in its enum section. Amass is a framework for domain information, but the piece that actually pulls this method's results, subdomain scraping, is called amass enum. On the right-hand side, you can see we gave it twitch.tv, and it went out to a whole bunch of sources it has in its databases, reached out to those web pages, using curl or headless Chrome or whatever, I don't know exactly what they're using, parsed those pages for subdomains of twitch.tv, and gave us a list of all the subdomains it found. If we do this on all of the seed domains we've gathered, we have now started to exponentially build out the number of subdomains, and with it the number of sites we have to attack. So this tool is invaluable; Amass is becoming the go-to tool for all subdomain enumeration, especially this method, subdomain scraping.

Also, this is still Amass: at the end of a run, not only does it give you all the subdomains, it also gives you this great table, which I feel is a little underutilized. This table tells you, okay, I discovered 439 subdomains, and here is where they live inside of these ASNs. You can see most of Twitch's were in Amazon's ranges, which makes sense because they were acquired by Amazon, but some of them were in other ASNs. And I've had instances, not on Twitch but on other projects, where there was a whole ASN related to the company that I didn't know about, and I found out because Amass built this table: hey, 70 of the subdomains we discovered appeared in this ASN or these IP ranges. I look at that and go, oh, I wasn't initially looking at those ranges, weird; let's go back to the beginning of my workflow and start enumerating those ranges. So this table is super cool. It also gives you information on the third-party services they're using; here you can see Twitch is using Bitly and SendGrid and Fastly and some other stuff.

Okay, the other tool I use for subdomain scraping is Subfinder, originally written by Ice3man and Michael Skelton, and now migrated to the projectdiscovery.io team, which is a group of bug hunters releasing some stellar tools. It also has multiple sources and extensible output; also a really great tool. Both of these tools are really good, and each of them has some different sources. So what I end up doing in my automation is run both of them, then cat the output together, sort it, and unique it. They each have a couple of different sources and a lot of the same sources, so that's what I do for my stuff: I just use both of them.
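That cat/sort/unique workflow is a one-liner at the end; a minimal sketch of how I'd glue the two scrapers together:

    # run both passive scrapers, then merge and dedupe the results
    amass enum -passive -d twitch.tv -o amass.txt
    subfinder -d twitch.tv -silent -o subfinder.txt
    cat amass.txt subfinder.txt | sort -u > scraped-subs.txt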
Okay, so this one is somewhat new, and it has been integrated into Amass a little bit, but I've had inconsistency between runs of this standalone tool, github-subdomains.py, and the output of Amass, so I still use it independently inside my automation when I'm looking for subdomains. The tool is called github-subdomains.py, and it's still scraping; what it's doing is going out to GitHub as a source. If you've ever been on GitHub, they have the search box up top, and basically you're searching for twitch.tv, and for anything that comes back with twitch.tv in a piece of source code, the tool will parse the subdomains out of that source. Now, this is written by Gwendal Le Coguic, and I still don't know if I'm saying Gwendal's name right, and that's okay. Gwendal wrote a whole suite of tools for GitHub enumeration: how to find secret keys on GitHub related to an organization, how to pull email addresses out. He has a wonderful blog, linked here in this slide, with a whole bunch of GitHub tools; github-subdomains is just the one looking for subdomains. The thing about using this tool is that the GitHub search API, not the tool, is just kind of unstable and sometimes returns rate-limited results. So what I do inside my automation, and this has given me subdomains I haven't found anywhere else, is run github-subdomains.py like five times, sleeping between each run so the rate limiting has a little while to die down, with a big sleep at the end before I run the next one, and then I cat all those results together and unique them. That seems to give me more consistency when parsing the API, and it has nothing to do with the tool Gwendal wrote; it's all about what GitHub does with their search API. So this is an awesome tool for finding lesser known subdomains, and I've found some great stuff using it.

Okay, the next one is shosubgo, by incogbyte, and this one will parse Shodan using a number of search operators with your API key, so I include this in my script as well. Again, this is one of those tools where the subdomain frameworks like Amass and Subfinder might do it for you, but I find that using the standalone tool just works better for me. I don't know exactly why that is, but I've run them side by side and gotten different outputs, so I still use it verbatim. It's also a fast-running script; it's not like I'm waiting five or ten minutes for it to complete, it usually runs in a minute or two, so it doesn't bother me to wait an extra couple of minutes to make sure I get coverage out of Shodan. So shosubgo will take a domain and your Shodan API key, parse Shodan using some search operators, and give you back all the subdomains related to, here, twitch.tv.
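The repeated-run trick against the GitHub search API looks something like this. The tool's -t/-d flags and the sleep lengths are assumptions on my part; tune them to your own rate limits:

    # run the scraper several times, sleeping so GitHub's rate limiting dies down
    for i in 1 2 3 4 5; do
      python3 github-subdomains.py -t "$GITHUB_TOKEN" -d twitch.tv >> gh-raw.txt
      sleep 60
    done
    sort -u gh-raw.txt > github-subs.txt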
Okay, so one of the other methods for subdomain enumeration or subdomain scraping is the cloud ranges. We talked about this a little bit earlier: we have all this IP space, and now we've started to identify subdomains, and you'll notice that some of those subdomains resolve to infrastructure in the cloud. Now, there's this idea that not a lot of people had been using, people have been doing it for a while but it hasn't been talked about much publicly, of just scanning the entire ranges of AWS, GCP, and Azure for SSL sites: anything that responds to a connection on 443. You scan the cloud ranges IP address by IP address, and when a host responds it gives you its SSL certificate; you parse the organization name or the domain name out of the certificate data, and then you look at that data and say, does it match my target twitch.tv, or does it have the keyword twitch in it? And you're like, cool, I found some stuff these people have put in the cloud that wasn't part of their ASNs and that I probably didn't know about before. Now, that's a tremendous amount of scanning; those ranges are huge. You can do it yourself with tools like masscan, and there's a guide here by Daehee Park which outlines doing it yourself, scanning those ranges, but it's going to cost you a little bit of money on the VPS you're using, etc. There's also a service by Sam Erb, from a DEF CON talk he did two years ago, called bufferover.run. It's an API: you can give it a domain, and since he does this scanning every once in a while, you basically take that data, parse it, and get a list of subdomains that were seen in the cloud ranges. I think he runs his scans every two weeks and updates the service, so it's not exactly live data. And this is one of the sources included in Amass; again, I just wanted to outline the single-shot tool here. You could probably feel safe using Amass to get most of this data back, but you can use the service as well. This method also finds some really good stuff. A lot of people are putting shadow IT infrastructure in the cloud, registering websites on a personal credit card, or just not paying attention, thinking nobody will ever find their cloud infrastructure because they've never published those domains anywhere other than internally. So you can find a lot of things for your target organization that are just sitting in the cloud, pretty much unsecured. In fact, it's been a big part of my research lately, scanning the cloud ranges and finding wickedly under-secured stuff, because people just don't think you'll ever find it. So this is a good method.
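Querying the service is a single request; a sketch using the endpoint roughly as it existed around the time of this talk (the service has since evolved, so check its current docs):

    # forward DNS entries come back as "ip,hostname" pairs in the FDNS_A array
    curl -s "https://dns.bufferover.run/dns?q=.twitch.tv" | jq -r '.FDNS_A[]?' | cut -d',' -f2 | sort -u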
All right, so we've scraped a lot of stuff to find subdomains. Now we want to brute force for subdomains. This is a tried and true method that I'm sure every red teamer or pentester has done before, usually with a tool like Fierce or one of the other tools like it from the past ten years: you take a word out of a dictionary, put it in front of company.com, and see if it resolves. A pretty simple method. Now, iteration in this field has come along in the last four years. A lot of the tools we were using to do this were great, but they used one DNS resolver, one DNS server, to resolve, and it took a long time; I remember running Fierce as a pentester ten years ago, and it just took so long to finish a large dictionary of subdomain brute forcing. In the last four years or so, the idea of using multiple resolvers to speed up the process was pioneered by massdns, and massdns was the king for a little while. Now Amass also includes the idea of using multiple resolvers: by default, Amass uses eight DNS resolvers when it brute forces, and you can add even more by using some flags. So here we're just running Amass over Twitch again, doing brute forcing this time with the -brute option, and adding the sources flag so we can see where some of these came from in the scraping part. You can specify the resolvers with -rf; RF means resolver file, I think. You basically give it a list of DNS resolvers, DNS servers you trust, and it will use more than one to do your brute forcing, which speeds things up significantly. I've also heard that another alternative to Amass for this is aiodnsbrute. I haven't used it yet, but I've heard it's also wicked fast and uses multiple resolvers, if you want an alternative. shuffledns, by the Project Discovery team, also does this; it is a wrapper around massdns. If you prefer to break that type of brute force out of Amass for whatever reason, whether it's stability or Amass taking too long for you, a lot of people like shuffledns as well for subdomain brute forcing. I haven't benchmarked the tools side by side.

Now, a subdomain brute force tool is only as good as the dictionary or wordlist you give it, because it's just trying to resolve a whole bunch of words in front of your target, twitch.tv. With that in mind, there are two ways you can approach which wordlists you feed these tools. One is a tailored wordlist, built off words and brand names that appear around your target: a contextual wordlist. Tomnomnom, who is a prolific bug hunter and an awesome human, I have a lot of respect for this guy and the tools he makes and the contributions he makes to the community, did a talk at NahamCon, a conference from a couple of months ago, all about wordlists and how to generate contextual wordlists for your target. It's called Who, What, Where, When, Wordlist, and I suggest going to watch it if you want to make some tailored wordlists. The other approach is a massive wordlist. Over the years we've had many, many tools that do DNS brute forcing; I went out, took the wordlists from all of them, and put them into one, called all.txt. It's linked in this presentation, people use it, it's on my GitHub, and it's basically the sorted and uniqued set of all the DNS names those lists have ever seen. Now, it has a lot of crap in it, it's true; it's a ton of lines, like a million or two. But using multiple resolvers, it actually doesn't take too long to do the brute forcing, and I don't necessarily care if I'm sending crap to a DNS resolver, like 0001.twitch.tv, which obviously is usually not going to resolve, because the list has some gems in it that I just can't get past: its efficacy. So I use all.txt when I'm doing subdomain brute forcing, and I think it's pretty good.
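Put together, the brute forcing step looks roughly like this with either tool; resolvers.txt is just a file of trusted DNS servers you supply:

    # Amass brute force with a custom wordlist and multiple resolvers, showing sources
    amass enum -brute -d twitch.tv -rf resolvers.txt -w all.txt -src
    # or the same idea with shuffledns wrapping massdns
    shuffledns -d twitch.tv -w all.txt -r resolvers.txt -o brute-subs.txt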
There are also some newer-school pieces of research on subdomain names. That file I made parses out the wordlists of all the subdomain brute forcing tools that have existed for the last ten years or so and puts them into one file, but there's some new research by the team at Assetnote, which is an attack surface mapping company. They did some cool research using Google BigQuery to generate and discover subdomains that were actually in use across sources like the Alexa top 1000 or 10,000 or something like that, or Reddit: any time someone referenced a URL on Reddit, what was the subdomain in that link? Or Stack Overflow, any time someone referenced a URL there. They parsed these sites with BigQuery and made subdomain lists you can use. They call this the Commonspeak project. I've integrated Commonspeak 1 into all.txt, but Commonspeak 2 came out a couple of years ago and it's not in all.txt; it's got some other sites they decided to target. I recommend checking it out for newer-school research on what subdomain names look like.

Then there's this idea of alteration scanning. This is a type of brute forcing: you have dev.company.com, but you could also have dev1, dev2, dev-1, or dev.1.company.com. This technique was pioneered by Shubs and Naffy when they wrote a tool called AltDNS, and this permutation or alteration scanning, whatever you want to call it, has now been built into Amass, so you can use Amass and it will try to find these naming conventions for you. It will sometimes give you gold, because people name stuff predictably in their subdomains. Some of the things I've done with permutation scanning: bypassing web application firewalls. Where I've had SQL injection on a main target and was getting blocked by a web application firewall, I've managed to find a permutation like www2 and then bypass the firewall because it wasn't applied to www2. I've also managed to bypass things like Akamai by finding the origin via predictably named alterations, like origin-sub or origin.subdomain, to get past filtering and go straight to the source. So you can use alteration scanning for that, as well as just to find new surface to attack.

All right, so now we're going to go into some other stuff related to wide scope recon. We have a lot of seed domains and a lot of subdomains right now, and we should have a pretty good map of the infrastructure that belongs to this company. So, one thing I didn't talk about before, and it's new to this specific version of the talk, which is why I called it v4.02, is favicon analysis. Every page has a favicon in the tab at the top of your browser; you see that little Tesla in the bottom left hand corner of my Chrome tab, that's a favicon. Favicons are little images, and what you can do is take a hash of the favicon and then search for that hash on Shodan, and Shodan will show you every other site using the same favicon. That will find you some gems, because people tend to reuse favicons across a lot of their sites, on domains you might not have seen before. There are some newer tools to do this; I used to do this a lot in my recon testing, I took it out for a little while, and I recently put it back in. Favicon analysis is kind of a fringe technique, but it's pretty cool. There's a new tool called FavFreak, by Devansh Batham, who's on Twitter at 0xAsm0d3us, and it does a couple of things here. It parses hashes from Shodan, it'll look for hashes in different places, and it also ships hashes for common infrastructure, like a scanner would, to identify different types of software. On the bottom right hand side, you can see it has fingerprints for different types of things, like Spring Boot pages or BIG-IP pages or Slack instances; they all have common favicons.
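The hashing itself is tiny. Shodan's convention, as I understand it, is a MurmurHash3 of the base64-encoded favicon bytes, so a sketch like this (assuming python3 with the mmh3 and requests packages installed) gets you a searchable value:

    # compute the favicon hash, then search Shodan for http.favicon.hash:<value>
    python3 -c '
    import base64, mmh3, requests
    data = requests.get("https://www.twitch.tv/favicon.ico").content
    print(mmh3.hash(base64.encodebytes(data)))'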
So when you scan a list of URLs or a set of domains on port 80 or 443, retrieve their favicons, and a hash matches one of those fingerprints, you know: oh shoot, I've stumbled upon a Slack instance for this organization, and you may not have found that doing anything else. You can also do the same thing when you scan the cloud, correlating one of the previous techniques with this one: find things that have my domain in the certificate data and also have these favicon hashes associated with them. So this is a fringe analysis technique to find even more esoteric related sites.

Okay, then we're going to go back to a tried and true method: port scanning. We have a lot of seed domains and subdomains and sites that we can work with now, and we want to port scan them, because they may have services that are not on 80 or 443 on these pieces of infrastructure. The tried and true hacker education will tell you to use Nmap here, but I use masscan because I believe it's faster: it has a rewritten TCP/IP stack and true multithreading, and it's written in C. So masscan, in my experience, has been faster than Nmap, even with flags like --min-parallelism; there was a huge debate on Twitter the other day about which is faster, Nmap with --min-parallelism or masscan. When I run a port scanning tool across 400,000 hosts, I have always found masscan, with its syntax and its advantages, to beat out Nmap. That's just my personal experience; if you really like Nmap for this, you totally can use it. And this is strictly for finding ports, not for service scanning. Nmap obviously wins in those areas: if you're going to do banner analysis, service scanning, script scanning, anything like that, Nmap, hands down. masscan only does one thing: it finds open ports. That's it. Also, if you want to learn how to use masscan, Daniel Miessler, one of my best friends in the whole world, wrote a study guide on masscan and all of its syntax. It's one of the best ones I've seen, even better than the man page, so go check that out to learn how to use masscan and set the flags correctly.

So what I do is take masscan, and, the problem with masscan is that it only scans IP addresses, it won't scan a domain name. You can use a tool called dnmasscan, which will take the domains we have, subdomains and seed domains, convert them into IP addresses, then scan them with masscan and tell you all the open ports on each IP address. So I take masscan and scan that across all of my subdomains, and then I feed that output to Nmap to do service scanning, because I now know which ports are open, and the service scanning starts to give me more information on the services running on those boxes. Then I do a quick default credential spray across everything that has certain services open: FTP, SMTP, SSH, Telnet, any type of SQL database, basic auth, and some other stuff. The tool I use to do that is called BruteSpray. BruteSpray takes the output of your Nmap scan: you feed masscan's results to Nmap, which does the full service scan and outputs an XML file, and then you feed that XML file to BruteSpray to do a quick default credential brute force against all of the services that support it.
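Chained together, that whole flow looks something like this; dnmasscan's argument order and BruteSpray's flags are from my reading of their READMEs at the time, so sanity check them:

    # resolve domains and masscan the resulting IPs (dnmasscan passes extra args through to masscan)
    ./dnmasscan subs.txt dns.log -p1-65535 --rate 10000 -oG masscan.log
    # service-scan the interesting ports with Nmap, writing XML for BruteSpray
    nmap -sV -iL ips.txt -p21,22,23,1433,3306,5432 -oX services.xml
    # spray default credentials against anything BruteSpray supports
    python3 brutespray.py --file services.xml --threads 5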
And I've found some good wins doing this: logging straight into SQL databases and SSH, just some horrible stuff that people leave unsecured on the internet.

All right, so while I'm doing all of this, and we're in kind of the 'other' category right now, I have most of this automated in a giant ugly shell script, and we'll talk about recon frameworks in a second, but it takes a little while for these tools to run. The conglomeration of tools and techniques we've talked about takes anywhere between five and fifteen minutes, maybe a little more if I have a lot of subdomains or it's a big project, to run and give me output. So while that's going, I do a technique called GitHub dorking, and this has found me many, many great things. What you do is basically go to GitHub and type in your domain as a search term, twitch.tv, and then you just start browsing source code that references twitch.tv or tesla.com or teslamotors.com or whatever, and you can find all kinds of sensitive data that former employees or current employees have accidentally put on GitHub. I have built a script just to build these search queries for me, these dorks as I call them; it's right here at this address, and you can grab it. It takes the domain I'm looking at right now, twitch.tv, and pairs it with terms like password, npmrc auth, docker config, PEM private (for certificates), s3 config, htpasswd, credentials, bashrc profiles, and ssh config. It searches those key terms along with the domain, and if it comes up that someone has accidentally put this on GitHub, it's usually automatically a finding, and it'll help me get into other systems.

Let me give you an example of something that happened in the real world. I found an admin page for a site and couldn't do anything with it; I fuzzed the form, great, didn't work, etc., etc. Then I found some dude who had posted a password on GitHub, not for that specific site, but related to my target domain, for some other system. That system was internal, so I couldn't access it. But then I tried that username and password on the admin portal I'd had no luck with before, and bam, got in. Stole credit cards, millions of records of data; it was game over at that point. So this can really help. There's a whole awesome talk called GitHub Recon and Sensitive Data Exposure on Bugcrowd University, by Th3g3nt3lman; it's probably the best primer on doing this type of dorking to find sensitive stuff on GitHub that I've ever seen, and I highly recommend you check it out. And Gwendal also wrote a GitHub search tool that allows you to do this on the command line and not in the browser.
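The generated queries are just GitHub search strings; a few representative examples of the kind of dorks that script produces (the exact terms in my script differ slightly):

    "twitch.tv" password
    "twitch.tv" filename:.npmrc _auth
    "twitch.tv" filename:.htpasswd
    "twitch.tv" filename:id_rsa
    "twitch.tv" extension:pem private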
Okay, so now you have a bunch of subdomains (and I think I'm running close to time, so I'm going to try to hurry it up here), and now we want to prioritize which ones we test. What you can do is feed all of your subdomains to a tool that does screenshotting, and there are several tools that do this. Currently I use EyeWitness; I was using Aquatone; I go back and forth. There are four tools here, Aquatone, httpscreenshot, EyeWitness, and WitnessMe, that all help you take screenshots, and some do additional analysis on the domains you feed them, but normally I just use them for screenshotting. It doesn't matter which one you use: try them out, see which one you like. I like EyeWitness. I feed it a list of domains, it visits those domains with a headless browser and takes a screenshot of each page, and then I just look at that folder and see which of these I want to prioritize first. Does it look like it redirects to the main site? Well, obviously I'm not looking for the main site right now. Is it some kind of back-end admin portal? Okay, I want to prioritize that. So screenshots can help you prioritize your work when you're given a large list of subdomains.

Now you can start to look for some vulnerabilities. One is subdomain takeover. There's a repo called Can I Take Over XYZ by @EdOverflow, and it gives you a list of services and the fingerprints that might indicate you can take over a subdomain pointing at them. I'm not going to go crazy deep into subdomain takeover because we're short on time, obviously, but you can use a tool to check for it. The best ones right now are nuclei and SubOver, I think; those are my two favorites. SubOver was an independent tool and has since been ported over into the nuclei framework that Project Discovery is making. Go check out nuclei: it has the most subdomain takeover checks I've seen in any tool. You can run it across a large list of domains and it'll tell you, hey, possible subdomain takeover at this address. So this is another wide-scope tool I use in the end part of my recon, when I have all these subdomains, and you can find some vulns just straight off of using it.

Okay, so we're going to blow through automation real quick. When you have some of these tools and a long methodology like I have, you end up automating it, right? I write a horrible bash script to do my stuff; it works for me, but it could be better. Some of the tools I use don't do certain things: they're not threaded, or they don't take certain types of input, like list inputs, or glob notation like nmap does, or range notation like nmap does. And sometimes I need to feed a tool with another tool and I can't do that. So Michael Skelton, also known as Codingo, wrote a tool called Interlace, which basically wraps around other tools and lets them do those things: it will thread them, it'll let you use different sources of input, it'll let you distribute tools. There's an example in a blog post by Hakluke which walks through threading Nikto, which doesn't support threading natively, so you can use Interlace to glue together a lot of the stuff your other tools don't do (see the sketch below).

Any tool written by Tomnomnom is awesome in automation, right? httprobe, waybackurls, meg. Tomnomnom has several talks where he explains his tools and how they work. I use httprobe a lot as the glue after finding subdomains: you feed it all the subdomains you've found in your initial analysis, and it tells you which ones have actual live, listening web servers associated with them. waybackurls will find you old URLs (from the Wayback Machine) associated with any site you're currently looking at. meg is like a directory brute-forcer, but for many hosts at once, and I use it to find some stuff too. All of Tomnomnom's tools are just amazing.
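A minimal sketch of how those pieces chain together, from subdomain list to screenshot triage; the file names and the EyeWitness invocation are my assumptions, not prescribed usage:

```bash
# Glue between subdomain discovery and prioritization.
# all-subs.txt is assumed to hold the subdomains found earlier.
cat all-subs.txt | httprobe > live-urls.txt      # keep only hosts with live web servers
cat all-subs.txt | waybackurls > old-urls.txt    # pull historical URLs from the Wayback Machine
meg / live-urls.txt meg-out/                     # fetch one path across many hosts

# Then feed the live hosts to a screenshot tool to triage visually, e.g.:
# ./EyeWitness.py -f live-urls.txt --web -d screenshots/
```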
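And going back to Interlace for a second, the Nikto example from that Hakluke blog post looks roughly like this (target list path and thread count are illustrative; `_target_` is Interlace's placeholder for each line of the input list):

```bash
# Run Nikto across many hosts in parallel, five threads at a time,
# writing one output file per target.
interlace -tL ./live-hosts.txt -threads 5 \
  -c "nikto --host _target_ > ./_target_-nikto.txt" -v
```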
All right, last part, I promise. Omar, are we okay to finish this last little bit? "Absolutely, go ahead." Okay, cool, cool, awesome.

So maybe recon is not your thing, right? I definitely run into hackers who say this is the most boring part of an assessment, and then I run into other people who say recon is awesome, it's their favorite part. So it could be that finding all these sites is not your thing and hacking the sites is more your thing, and that's cool. There are a lot of new tools out there these days that basically automate all of this for you, and they're called recon frameworks.

If you've ever looked at a video game like Diablo, every once in a while content creators will make tier lists of builds. So I've put recon frameworks into a few tiers: C tier, B tier, A tier, and S tier.

C tier recon frameworks are built around scripting up other tools in Bash or Python. They're a starting place: they don't really have a workflow, they only have a few techniques, and they're not really extensible, but they work really well. Let me put out a big disclaimer here: the tools I use personally on all bug hunts are C tier tools, and they do what I want, and they're great. So there's nothing bad to say about a C tier tool.

A B tier tool, in my mind (and this is all very rough; these classifications are not in any way super serious), has some of its own modules, does some of its own sources, maybe has a GUI, maybe has some workflow where it brings data from the end of the recon process back to the beginning if it finds new stuff. It has more techniques than a C tier, but it still runs point-in-time and still works on flat files to track the recon data.

A tier frameworks maybe write all of their own modules. Some of these tools start to have GUIs; they run iteratively, like on a cron schedule; and they start to manage things via a database, so you can compare recon scans over time.

And then S tier is the highest level of what's out right now: they write a lot of their own modules, they have a GUI, they run iteratively, they manage all their data via a database, they scale across multiple boxes to make the scanning faster, they send alerts back to the user via email, text, Slack, whatever, when they find new things, and they have some novel techniques that not a lot of people are doing.

So this is how I classify some of the frameworks. I'm going to show some of them to you now, and you can pick one that works for you if you're not into this recon stuff. A warning, I had to put this in here: I don't want to hurt anyone's feelings. I am grading some tools here, but it's not serious; it's my rough experience, my gut feel. All these tools are wonderful and I respect the authors so darn much for just putting code out there; I can't say enough about the people who open source their code. And even C tier: like I said, my tool is a C tier tool and it works for me. So this is just subjective and based off my own experience.

Okay, so C tier: some solid tools you could look into that wrap around a lot of other existing tools. One listed here that I like is called Ultimate Recon, and the reason I like it is because it's using the new nuclei scanner that's out from Project Discovery.
It implements finding subdomains using a couple of different tools, then takes all of those, port scans them, and runs nuclei templates on them. So it's pretty good; I like it. All of these different C tier tools are choosing their own favorite tools to wrap around, so there's no single best one, but I tend to like Ultimate Recon. There are a couple of other good ones in here too, so check these out.

B tier frameworks: this is LazyRecon by capt-meelo, and here you can see his workflow, very similar to what we've talked about in this presentation. Amass and Subfinder; combine those two and you get a final subdomain count (there's a small sketch of that combining step below). Then you start to do some vulnerability analysis with Subjack and a CORS scanner to find vulnerabilities around subdomain takeover and CORS misconfigurations. Then you start brute forcing, then port scanning, then you do some screenshots, and then you have a final output. A lot of the stuff we talked about today is implemented inside this workflow, so LazyRecon is the one I like to point out in the B tier.

A tier: there are a couple, and they all teeter on being S tier. I wonder if I should even really have both an S tier and an A tier; usually the thing separating some of these from the others is a GUI. The one I have referenced on this slide is called Findomain, and Findomain does pretty much everything: subdomain discovery, scraping, brute force, and all of that via Amass and Subfinder and assetfinder and everything, and then it stores it in a database. It does iterative scanning; it'll text you if it finds new domains since its last scan; it'll email you, which you see in the bottom left-hand corner, which is great. The only thing it lacks is a GUI; it's a command-line-based tool. So it teeters on doing most of the stuff we want. The enterprise version adds port scanning and screenshots; that version is a for-pay offering, but it's affordable to bounty hunters, and some of those are really good subscriptions to pay for. So Findomain: pretty cool.

Okay, here are our S tier frameworks. Some of them you'll never get to use because they're commercial products, but I include them here to give people ideas as to what they should be working towards in their own frameworks. Intrigue.io is an asset discovery tool written by Jonathan Cran, who's a friend of mine. He builds Intrigue, and he has a GUI for it; it's a SaaS platform, and businesses buy it to map their attack surface, but it's doing the same techniques we've talked about today. The cool thing about Intrigue is that even if you don't pay for the SaaS subscription he sells to businesses, he open sources almost all of the code for intrigue.io, so you can run your own hosted version, or at least see what he's doing in his analysis. So Intrigue is pretty cool.

Assetnote is another one; it's a B2B type play for recon, built by a team of former bug hunters who made this awesome platform. This is probably the gold standard of asset management or attack surface management tools. You can see that it breaks things up by assets that need attention, it's got a lot of graphing, and it does a lot of custom vulnerability checking. So Assetnote is S tier, but it has a high price tag, so you're probably not going to use it as a bug hunter or red teamer; it is, though, the kind of gold standard that a lot of tool makers should probably work towards.
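Going back to that LazyRecon workflow for a second, the first combining step looks roughly like this; the flags follow standard amass/subfinder usage, and the file names are my own, not taken from LazyRecon itself:

```bash
#!/usr/bin/env bash
# Sketch: merge amass and subfinder output into one deduplicated list,
# the "final subdomain count" step in a LazyRecon-style workflow.
# Usage: ./combine-subs.sh example.com
domain="$1"
amass enum -passive -d "$domain" -o amass-subs.txt
subfinder -d "$domain" -o subfinder-subs.txt
sort -u amass-subs.txt subfinder-subs.txt > all-subs.txt
wc -l < all-subs.txt   # the final subdomain count
```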
SpiderFoot is another one; it does a lot of OSINT-type domain correlation, covers some of the techniques we've talked about in this presentation, and they're adding more all the time. I know the author is really interested in adding more bug-bounty-focused features, and it's got a great graphing library and monitoring and everything, so SpiderFoot is pretty cool.

Then there's the unreleased Project Discovery framework. We've talked about Project Discovery a couple of times in this presentation, and that team's unreleased framework looks to be a killer. It looks awesome, and I'm really excited for when it comes out. You can see here a dashboard of a project: all your discovered domains and seeds, and the scan processes; if you dig into any of those you get screenshots, port scans, technology identification for the site, its activity, its changes over time. So the Project Discovery framework, when it comes out, will be pretty cool.

Jaeles is a vulnerability scanner written by j3ssie and team, and it's also really good. It's a GUI workflow that wraps around some command-line stuff, and it does vulnerability scanning. Not a lot of red teams are doing verbatim scanning with Nessus, but if you want to look for some very pointed stuff, you can use Jaeles and look at the CVE checks it ships; you can see signatures there for the Zoho management page and the Fuel CMS RCE, so they're looking for some cool vulns. Jaeles, I really like it.

Osmedeus is awesome as well; it's very similar to some of the previous pages we've seen. It'll take all of your subdomains, resolve them to IPs, port scan them, tell you the technologies, do screenshots, all kinds of stuff. Osmedeus is also pretty easy to stand up and really good. HunterSuite.io is the same kind of idea: you can see technology parsing, services, domain discovery, etc. bounty.offensive.ai is the same type of deal; it includes some vulnerability scanning, technology fingerprinting, subdomain finding, and then attack crafting.

reNgine is the one I used last week, or the week before. reNgine is pretty sick: you can have separate projects in it, it does the subdomain scanning, and it basically takes the methodology I outlined in this presentation and turns it into a hosted tool. They're making improvements every week, so there's lots of potential in reNgine, which I highly suggest checking out. And then Scout: pdp and the SecApps team have been working on tools for well over a decade, and they've built out Scout, which is a paid offering but affordable enough for bounty hunters. Same idea, right: projects, subdomain enumeration, related domains, reverse DNS, and it supports screenshotting. It's an awesome tool.

And then lastly, nuclei. I talked about it very briefly, but anyone who's not using nuclei in their bounty scanning right now is at a disadvantage. They're coming out with some epic templates for CVE identification, and you can basically build your own via a YAML file; it's a scanner that will go out and scan all your domains for vulnerabilities, subdomain takeovers, all kinds of stuff. I really have been using nuclei to great success lately (there's a tiny example of a custom template below).
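As a hedged sketch of what that looks like in practice: the template fields follow nuclei's documented YAML layout, but this specific check, the path, and the match string are made up for illustration, and the `takeovers/` template path assumes you have the community templates installed:

```bash
# Run nuclei's community takeover templates across your live hosts,
# then add one tiny hand-rolled check of our own.
nuclei -l live-urls.txt -t takeovers/ -o takeover-hits.txt

cat > my-admin-check.yaml <<'EOF'
id: my-admin-check
info:
  name: Hypothetical exposed admin login check
  severity: info
requests:
  - method: GET
    path:
      - "{{BaseURL}}/admin/login"   # made-up path, for illustration only
    matchers:
      - type: word
        words:
          - "Admin Login"
EOF

nuclei -l live-urls.txt -t my-admin-check.yaml
```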
And that's it, that's all I got. So that's the Bug Hunter's Methodology, and thanks for giving me the time today, I really appreciate it. Sorry I went over a little bit.

No worries, thank you so much for the presentation and the amazing support, and once again, from the bottom of my heart, thank you for supporting DEF CON and the DEF CON Red Team Village, and thank you for having me. And everyone here, please take a look at all the talks and activities that we have on our website: we have the CTF, of course, the cyber raid contests as well, tons and tons of activities throughout this weekend. The link should be in the description at the bottom of your stream. Know that…