Thanks a lot. Hey everybody. I'm Richard Gold, as mentioned, director of security engineering at Digital Shadows. This is my first time talking at Recon Village or DEF CON, and indeed my first DEF CON. I'm glad the shots are here to help me through this experience. I'm here to talk to you about asset discovery. As part of the recon process, you need to discover assets, and that's something we've spent a lot of time thinking about in the last few years. So, when you're doing your OSINT reconnaissance, you've got a target, you're going to do some investigation, and when you get started, it's tricky to define the scope accurately. For any particular organisation you want to look at, if it's of a reasonable size, especially these large, sprawling, acquisition-driven organisations, companies who've bought a whole bunch of companies over time: they've maybe integrated them, maybe not, maybe there's infrastructure left over, maybe there isn't, they're in different countries, different jurisdictions, and you have this huge, sprawling mess. So, figuring out what that scope is can be really challenging. And another issue on top of that is cloud, of course. It's a thing, I think we've all got to that point by this stage, and we struggle to find what belongs to whom. 
For example, if you have an IP address, and you're thinking, well, this is an IP address, it's likely associated with my target, and then you find out that the day before they released the IP: it's gone back into the pool, somebody's pulled it out, spun up an NFS server, and now you're looking at some crazy vulnerabilities you didn't even imagine would happen, and it's just a false positive, and you get crushed and go home and cry. That kind of thing really messes up the scoping aspect of doing the recon. And then on top of that, it's, well, what sources do you want to look at? Do you want to look at network ownership information? Do you want to look at social media to find employees? Do you want to look at documents that have been leaked onto the web? Do you want to look at the actual websites themselves? Maybe a bit of appsec? Maybe you're looking for some misconfigurations, that kind of thing. And there are lots of great tools, as the speaker just now mentioned, which already exist to help you with all this kind of stuff. But typically, in our experience, these tools focus on the breadth of collection. The big advantage that they bring, and it is an advantage if you're doing that kind of work, is to look at a huge range of different sources: social media, blacklists, all this kind of stuff. But it can quickly become overwhelming, especially if you're looking at a big target. We struggled with this exact problem ourselves a bunch of times, and after attempting to solve it a whole bunch of times in a manual way, the realisation slowly dawned on us that computers are quite good at automation, and maybe writing a program to do this kind of stuff for you might not be such a bad idea. So, after much banging our heads on keyboards, we created the Orca, our targeted OSINT framework for discovering networks and services related to our targets. 
It's on GitHub as of this morning, so please do feel free to git clone it, have a play, see if you like it, give us some feedback, take it for a spin. Specifically, the goals are really around keeping the scope narrow: spending quite a bit of time on making sure that you can establish the scope accurately, and then also spending quite a bit of time on asset traceability. We had this problem many times ourselves: we'd do a whole bunch of recon, gather all of this information, and we'd find something that was kind of interesting. You find a vuln, some old misconfigured server, some Linux box that had been spun up three years ago and abandoned, and you're like, oh, yeah, great. You find this thing and you spend a bit of time on it, and then you're like, who does this belong to again? Where did this come from again? How did we find this? Then you'd have to basically redo your whole recon process all over again just to find that piece of information, to make that chain from where you started to this interesting finding. So, after doing that a whole bunch of times and realising that maybe we should really find a way to automate it, we've baked asset traceability into the Orca. We'll go through all of this during this presentation, but it allows you to know with a pretty high degree of certainty, when you find something, the path that you took to get there. So, in general, in the overview here: what does the Orca actually do? There's a whole bunch of stuff around domain discovery, using both Google and Shodan; we'll talk about exactly what that means in a second. It does subdomain enumeration, as the speakers just now mentioned, which is very useful: you feed in google.com and it gives you the subdomains underneath it. 
And then once you've got these subdomains, you can enumerate them with Shodan to figure out what ports are open, what services are listening, what type and version of software is running, and then whether there are any vulnerabilities associated with it. And if there are vulnerabilities, are there any publicly available exploits for them? It gathers all of this information for you and gives you an Excel spreadsheet, because the whole world runs on Excel, whether you like it or not. But of course you can always dig into the database yourself, and there's a whole bunch of command line output as well. We found that we spent a lot of time taking the CSV or the SQL output from the database and trying to make it into something that we could give to somebody else to do the follow-up work. So after cursing at Excel for many months, we decided to bite the bullet and write an Excel output, so you can get a spreadsheet and give it to somebody to run with. The end goal is to find vulnerable or misconfigured systems. We found it to be extremely helpful to be very, very specific about that. We often had trouble before, as I mentioned, with taking a very broad view, because you end up in a situation where you get all this stuff and you don't really know why you've got it. It could be interesting, it could be not, and then you have a hard time justifying the impact of your findings to people. You show them something you found, it's kind of cool, and they're like, well, yeah, I guess, but why is it interesting? Whereas you can express it very clearly when you have found something which is vulnerable, certainly if it's vulnerable to a public exploit, or something which is misconfigured, maybe revealing information that you wouldn't typically want to reveal to the public. 
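To make that concrete, here's a minimal sketch (not the Orca's actual code) of boiling a Shodan-style host record down to ports, CPEs and CVEs. The sample record is invented, the CVE is just an example, and the fields used (`ip_str`, `ports`, `data`, per-banner `cpe` and `vulns`) follow the general shape the Shodan API returns from a host lookup:

```python
# Sketch: flatten the interesting bits of a Shodan-style host record.
# A real record comes from shodan.Shodan(api_key).host(ip) and has many
# more fields; this sample is invented for illustration.

def parse_cpe(cpe):
    """Split a CPE 2.2 URI like 'cpe:/a:openbsd:openssh:6.6.1' into
    (vendor, product, version)."""
    parts = cpe.split(":")
    vendor, product = parts[2], parts[3]
    version = parts[4] if len(parts) > 4 else None
    return vendor, product, version

def summarise_host(record):
    """Pull out ports, per-service CPEs and any associated CVE ids."""
    services = []
    for banner in record.get("data", []):
        services.append({
            "port": banner["port"],
            "cpes": [parse_cpe(c) for c in banner.get("cpe", [])],
            "vulns": sorted(banner.get("vulns", {})),
        })
    return {
        "ip": record["ip_str"],
        "ports": sorted(record.get("ports", [])),
        "services": services,
    }

sample = {
    "ip_str": "198.51.100.10",   # invented address
    "ports": [22, 80],
    "data": [
        {"port": 22, "cpe": ["cpe:/a:openbsd:openssh:6.6.1"], "vulns": {}},
        {"port": 80, "cpe": ["cpe:/a:apache:http_server:2.4.7"],
         "vulns": {"CVE-2017-7679": {"cvss": 7.5}}},
    ],
}
summary = summarise_host(sample)
```

From a summary like this it's a short step to the later stages the talk describes: take each CVE id and ask whether a public exploit exists for it.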
Having this very, very specific end goal, we found, really focused the work that we did and gave us much higher quality results from our recon. It also helps us to discard unnecessary sources: if a data source is not helping us find vulnerable or misconfigured systems, we don't use it. And it ensures that the collection is relevant, so when you are collecting something, there's a high probability that you're getting something useful out of it. Also, for recon, I'm sure that everybody here follows the same rules, but we found this way of expressing it to be pretty concise: no exploitation, no auth bypass, and no DDoS. So don't throw an exploit at something; obviously you're interested to know if something is vulnerable, but you have to keep yourself in check at that point. Breaking authentication, big no-no, and of course DoSing people is right out. Keeping those things in mind also helped us with how we should build the tool: deciding what things we should put in and what things we should leave out. I mean, I don't know if you've ever played with some of the tools out there which say, oh, we're a recon tool, and then you see it's launching sqlmap in the background, or like Armitage Hail Mary, throwing exploits at things. You're like, oh my God, Ctrl-C! So we try to avoid that. The Orca won't send you to prison, which I think is a good selling point for any tool. So how do you find the assets which are relevant to the target? Again, when you're doing this kind of enumeration, you want to be doing enumeration which is going to keep you in scope. We're going to talk about the different kinds that we do, but just to give you a hint: we ensure that the enumeration we do keeps you in scope, and that it is then tracked in the database so that you can skip back to it afterwards. 
So you have a clear lineage between where you started from and where you ended up. We'll talk about all the different enumeration techniques in just a second. In a nutshell, this is what the Orca looks like. It has a very friendly user interface, assuming that you love the command line as much as I do. Who doesn't? I mean, come on. This is what it looks like: it has help, it has contextual help, and it's pretty self-explanatory, that kind of thing. There is also, due to complaints from everybody else in the team, an example walkthrough on the wiki page, which takes you step by step through all the different features the Orca has and gives you a pretty good place to start for what it's like to use the tool in anger. So where do you start? There's usually one of three pieces of information that we start with: either the organisation's name, the domain name associated with the organisation, or a CIDR prefix. Those are pretty much the main places where you want to start, and then we have all the enumeration functionality following on from that to help you derive the assets you're interested in from these original pieces of information. But usually, when we were doing our OSINT recon engagements, we'd be given just the organisation name. Somebody would rock up and go, hey, we're interested in this organisation, we want to know more. And we'd say, well, okay, tell us all that you know about this organisation. And they'd go, yeah, we just did. So you've got to start somewhere. At least if you've got a name, it gives you something to get going with. A domain name if you're lucky, so that you can make sure that you're going after the right target. 
It's amazing how much overlap there is on the internet between different names, and especially acronyms, which just end up being shared everywhere; everyone uses them all over the place. And a CIDR prefix, again if you're really lucky, gives you an idea of the network ranges used by a particular organisation. So, the initial enumeration step: you can do one of these three things from those three types of information. I'm using PayPal in the examples because they have a really nice bug bounty scope, so you'll see a few of their domains and host names here; and also Nmap, who have a really nice set of vulnerable machines on the internet that you can use for recon, which is pretty nice, good for testing. So if you're starting, let me run over here: you've got your name, you've been told you're going after PayPal. OK, so you can take PayPal and use a search engine, Google or Shodan, and you can get paypal.com out the end of it if everything works out properly. If you're just given the domain name, you can do subdomain enumeration and get something like this out the end of it. And if you've been given a CIDR prefix, you can then enumerate all the different... oh, that's close, sorry about that. You can enumerate all of the IP addresses inside that CIDR prefix. In terms of asset traceability, this is where it comes in. When you put your first piece of asset data into the database, it gets tagged: paypal.com gets an ID there, and that ID flows through when you do the next stage. So even when you've got to the next stage, where you've found a subdomain or host name, that will get its own ID as well. And that will flow through for the later enumeration you do of other things: network services, vulns, exploits, DNS, that kind of stuff. 
But the original asset data identifier follows along with that piece of subdomain enumeration, so that by the time you get to your end state (remember, we're looking for vulnerable or misconfigured servers) you can find out which piece of asset data it came from. In some cases, in terms of domains, we'd be given a list of 2,000 domains, 3,000 domains, and asked, give us some cyber threat intel on this stuff. You're like, oh my god. Being able to make that asset traceability, having it baked into the database, is really, really good for stopping you from making horrible mistakes and reporting false positives back to your customer or to your own organisation; it keeps that under control. I found that to be really helpful. So when you do a search like this, as I mentioned, especially with the organisational name: you put it into a search engine, you get back some set of domains. How do you know that a domain you've discovered is connected to your target? And the answer is, you don't. You just can't. There's no way, automatically, with a decent false positive rate, to be really sure that the name of your target organisation belongs to a certain domain that you get from a search engine. Doesn't matter which search engine: Shodan, Google, Bing, whatever. You just won't know. You have to do additional recon yourself, as an operator, to figure out if it is actually relevant or not. So when you do that kind of thing with the Orca, you can see an example of the syntax here. If you're familiar with Python Click and that kind of text user interface: it's going to prompt you when you do that initial recon. And it's the same not only when searching for organisation names, but also for CIDR prefixes; we'll get on to that a little bit later. 
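The parent-ID idea above can be sketched in a few lines. This is a hedged illustration, not the Orca's actual schema: every asset row records the ID of the asset it was derived from, so any finding can be walked back to the seed. Table and column names here are invented:

```python
import sqlite3

# Sketch of baked-in asset traceability: each row points at the asset it
# was derived from via parent_id, so lineage is just a chain of lookups.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE assets (
    id INTEGER PRIMARY KEY,
    value TEXT NOT NULL,
    kind TEXT NOT NULL,
    parent_id INTEGER REFERENCES assets(id))""")

def add_asset(value, kind, parent_id=None):
    cur = db.execute(
        "INSERT INTO assets (value, kind, parent_id) VALUES (?, ?, ?)",
        (value, kind, parent_id))
    return cur.lastrowid

def lineage(asset_id):
    """Walk parent_id links back to the seed and return the full chain."""
    chain = []
    while asset_id is not None:
        value, asset_id = db.execute(
            "SELECT value, parent_id FROM assets WHERE id = ?",
            (asset_id,)).fetchone()
        chain.append(value)
    return list(reversed(chain))

# Seed -> subdomain -> finding, each tagged with its parent's id.
seed = add_asset("paypal.com", "domain")
sub = add_asset("beta.sandbox.paypal.com", "hostname", parent_id=seed)
vuln = add_asset("CVE-2017-7679", "vulnerability", parent_id=sub)
```

With this in place, `lineage(vuln)` reproduces the whole path from the original domain to the finding, which is exactly the "how did we find this again?" question the traceability feature answers.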
But you have to have that prompting, that additional manual confirmation step, because the ownership information, like domain whois, simply isn't accurate enough. Especially if you're doing something with a related name: there'll probably be somebody called PayPal Burger Company here in Las Vegas, just because there's always something. So you've got to watch out for that. It's a bit tedious, but it's the only way you can get quality data, and it will save you a lot of time later on when you're trying to disambiguate, at the end of your process, where the hell all this stuff came from. If you do all of that work up front, if you frontload it and spend your time at the beginning taking care of this, then much later down the line you'll be really grateful that you did. Just a note on terminology: subdomain enumeration versus hostname enumeration. They're generally used interchangeably; I also use them interchangeably. They're not quite the same thing. A hostname would be a fully qualified domain name, an FQDN: the full name, where no additional resolution is needed in order for you to use it. A subdomain would be, in this case, beta.sandbox as a subdomain of paypal.com. Just to clarify that, in case anybody was wondering. So, subdomain enumeration: a pretty standard bit of recon. There are a couple of different data sets we use for that. Rapid7 FDNS: great data set, I'll talk about that in just a sec. Certificate transparency: pretty good. We combine the two together because they're complementary, and they have slightly different timescales associated with them as well; I'll talk about that too. We're using DNSDumpster for this public tool release, and if you want to enumerate the Rapid7 data, they provide you with an API that you can use, which is very, very cool. The FDNS data set comes from a whole bunch of different sources as well; you can see the sorts of places it comes from. 
They get PTR records from DNS. They do sweeps of the internet with masscan and ZMap to get SSL information, so they pull the SAN and the CN from the certificates; they look at HTTP responses, zone files, all that kind of stuff. So check out their wiki page, which breaks it down pretty nicely; it's good to know where the data is coming from. You'll also see, when you do this kind of subdomain enumeration using the FDNS data set, certain patterns emerging. One of the patterns is quite strange, long, complex FQDNs. The reason for that is that they're pulling them from SSL certificates, which will typically carry the full name used internally for a system, which is quite interesting and usually gives you some nice breadcrumbs. So that's certainly something worth looking at. Next, certificate transparency logs. Most certificate authorities these days will write to the certificate transparency logs when they issue a certificate. That, if you're not already aware, is basically a global repository where people record what's being issued. It's a way to combat things like fraudulent issuing of certificates, but it turns out that for recon it's super, super useful, because you basically have a real-time stream of certificates being issued. crt.sh has a really nice search interface for that, which is very helpful, and the log is updated in real time as well. So if you're lucky you can actually catch a server being provisioned: an SSL certificate being issued for a server before the service is fully provisioned. It might be the case that this happens before certain security measures are put in place, before debug has been disabled, before the change from staging to production is fully made. It's a really nice source to look out for. And that's why the Rapid7 FDNS data and the certificate transparency data are quite complementary. 
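To show how little code the CT side needs, here's a hedged sketch of pulling in-scope subdomains out of crt.sh-style JSON (crt.sh supports a JSON output mode on its search URL). The entries below are invented; in practice you'd fetch the records over HTTP first:

```python
# Sketch: extract unique in-scope subdomains from crt.sh-style JSON records.
# crt.sh returns a list of log entries whose name_value field can hold
# several names separated by newlines; these sample entries are invented.

def ct_subdomains(records, apex):
    names = set()
    for rec in records:
        for name in rec.get("name_value", "").splitlines():
            # Drop wildcard prefixes like '*.sandbox.paypal.com'.
            name = name.strip().lstrip("*.").lower()
            # Keep only names under the apex domain we were asked about.
            if name == apex or name.endswith("." + apex):
                names.add(name)
    return sorted(names)

sample = [
    {"name_value": "beta.sandbox.paypal.com\n*.sandbox.paypal.com"},
    {"name_value": "www.paypal.com"},
    {"name_value": "paypal.evil-lookalike.example"},  # out of scope, dropped
]
print(ct_subdomains(sample, "paypal.com"))
# → ['beta.sandbox.paypal.com', 'sandbox.paypal.com', 'www.paypal.com']
```

Note the scope check: the lookalike domain is discarded automatically, which is the same keep-in-scope discipline the talk keeps coming back to.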
So I highly recommend using both of those together to get the most bang for your buck from your enumeration. As well as that kind of third-party dataset enumeration, the Orca will also do enumeration of DNS. You can give it a domain and it'll enumerate the DNS records associated with it: A, AAAA, MX, TXT, all that kind of good stuff. But you can also have it do that for every single hostname; it's just a flag you can set, so it's very, very straightforward. It'll crunch through all of the FQDNs you've discovered and check all the DNS records for every single one. And this is a real goldmine; this is really, really useful. You get all kinds of interesting things. You see, for example, which mail providers, cloud mail providers, organisations are using. That's usually really interesting, especially if later on you're going to be doing a phishing campaign against them: nice to know what they're using. You can fingerprint their security solutions remotely, just with DNS, very, very easily, which is brilliant. TXT records will also tell you what third parties they're using, because they will have things like the SPF record, where they say, oh yeah, Salesforce can send email on our behalf, and all these different other services that people like to use. That's really helpful, again, for any follow-up work you want to do. Another one which I really like is the CNAME, the canonical name. What you can see is that they'll have something like sso.paypal.com, and that will be the A record, but the CNAME will be something like paypal.okta.com. So you'll be able to see the actual service they're using for something, whether you're fingerprinting login and SSO solutions, or CDNs, or anti-malware and spam filtering, all that kind of stuff. 
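That CNAME trick boils down to a small lookup table. The suffix-to-service mapping below is illustrative, not exhaustive, and the hostnames are made up; in a real run you'd first resolve the CNAME (for example with dnspython) and then feed the target into something like this:

```python
# Sketch: fingerprint third-party services from CNAME targets.
# The mapping is illustrative; extend it with whatever fingerprints
# matter for your engagement.

FINGERPRINTS = {
    "okta.com": "Okta SSO",
    "cloudfront.net": "AWS CloudFront CDN",
    "azurewebsites.net": "Azure App Service",
    "pphosted.com": "Proofpoint mail filtering",
}

def fingerprint_cname(cname):
    """Return the service a CNAME target points at, or None if unknown."""
    cname = cname.rstrip(".").lower()   # strip trailing dot from DNS answers
    for suffix, service in FINGERPRINTS.items():
        if cname == suffix or cname.endswith("." + suffix):
            return service
    return None

# e.g. an SSO host whose CNAME reveals the real provider behind it
print(fingerprint_cname("paypal.okta.com."))
# → Okta SSO
```

The same pattern works for CDNs, mail filters and anti-spam services: one resolution per host, one suffix match, and you've mapped the third parties without touching the target itself.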
Being able to see which actual third parties are being used, just from the DNS recon, is very, very useful. That gives you a lot of very good information about your target just from the DNS resolutions. The Orca will handle all of that for you and put it all into the database so you can look through it later. If you don't want to use the Orca: this is a busy slide, but I think Jesse mentioned in the previous talk a great, great tool, OWASP Amass. Highly recommended if you're not already using it; the bug bounty people are clearly all over it. You can see all the different sources that it uses, absolutely brilliant. You can get it to give you plain text output, and you can take that text output and import it into the Orca. So if you want to use Amass first because you're more used to it, you can then import its results into the Orca: it has an import-from-file command and you're off to the races. So yeah, you don't have to read all of that; it's just to give you an impression of all the cool data sources that Amass uses. A big shout out to that. So, CIDR ranges: the Orca doesn't currently support the discovery of CIDR ranges, so you will have to discover them on your own. But there's the network ownership information that you can get from the various providers, and it's all free: the whois databases you can get for free if you show that you're a security researcher with a valid use case, which is pretty good. You can get some really nice information, so you can see, for example, in this case with eBay or PayPal or something like this, the network range and the ownership information, so that you can correlate them. Again, where it really falls down is cloud providers. 
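One way to corroborate whether an address is cloud-hosted at all is to check it against the ranges the providers publish (AWS, for instance, publishes its current ranges as a JSON file at ip-ranges.amazonaws.com). Here's a minimal sketch using only the standard library; the two prefixes below are illustrative stand-ins, not a current provider list:

```python
import ipaddress

# Sketch: check whether an IP sits inside a cloud provider's published
# ranges. In practice you'd download the provider's current range list;
# these two prefixes are just stand-ins for illustration.
CLOUD_PREFIXES = [
    ipaddress.ip_network(p) for p in ("52.94.0.0/22", "3.5.140.0/22")
]

def in_cloud(ip, prefixes=CLOUD_PREFIXES):
    """True if the address falls inside any published cloud prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in prefixes)

print(in_cloud("52.94.1.17"))    # True: inside 52.94.0.0/22
print(in_cloud("198.51.100.9"))  # False: not in any listed prefix
```

A hit here doesn't prove the IP belongs to your target (that's exactly the shared-pool problem), but it does tell you the whois answer will be the cloud provider, so you'll need some other corroboration before trusting the finding.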
This is a constant headache, really, because these days many companies don't even bother owning their own infrastructure. Back in the day, the more traditional companies (and they still do it) would have bought a range of IP addresses: it's registered to us, and that's ours. And a lot of smaller companies will just use straight ISP allocations, like AT&T or Sprint or something like that, so if you take one of their host names or domain names and resolve it, you'll get just a generic ISP as the ownership information, which doesn't really help you, because you don't know what other stuff has been assigned there, and you don't know if it's shared. So it's a bit of a dead end, a bit of a rabbit hole really. Following on from that, a lot of companies nowadays will be using the cloud: you've got your Azure, Amazon AWS, Cloudflare, Google Compute, and you end up with all kinds of stuff. And again, as I mentioned already, people will spin up something, use it, give it back to the pool, and then that IP address where you found that sweet, sweet vuln will just disappear into the ether, and you end up looking at somebody else's cat-walking company or something, rather than the financial services company you were looking at. It's just a mess. So we recommend at least not looking at the cloud stuff unless you're really, really sure. You typically have to have some other external method of corroboration to make sure that you've really got the right piece of information: that you're looking at the right IPs and that they're being used right now. So, like the previous step, where we talked about having that prompting, where you go through the different steps and get prompted (yes, this is the one I want to add; no; yes; no), you also have to take a similar approach with the cloud stuff. And 
again, for organisations which are pretty much cloud native, you've got to be pretty careful; you've got to really know what you're looking at. The good thing is that these cloud providers do tell you which IP addresses they use. The Orca will give you the network ownership information once you've done an enumeration of an IP address, for example, so you will see who owns it, but you can also check yourself by pulling down these lists of IPs. So that's a little bit of our experience. Now we've gone through the enumeration of getting hosts: we've got a bunch of different ways of going from domains, company names and CIDRs, and boiled that down, so we've got a bunch of hosts. Now we're going to look these up in Shodan, a very popular tool which I'm sure everyone is very, very familiar with. It is pretty impressive. You do need a paid key for this, but it's very, very cheap, and Matherly does a great job, so I don't feel bad about promoting it here. What this does is you give it an IP address and you get back the ports, obviously, which is pretty handy; if you put in scanme.nmap.org, you get back which ports are open, which is pretty nice. You also get back the CPEs, the Common Platform Enumeration (we'll talk about that in a bit more detail in just a sec), and, as I mentioned, the network ownership information, and the modules, which is pretty nice. So here you can see on scanme that the ports which are open correspond to the modules which are listed here as well, which is pretty cool. But the really nice thing with Shodan is that it will tell you which modules are in use even if the ports are changed. So if you're running, for example, a telnet server on some kind of weird port (why not, you decided to run it on 3389 just to mess with everybody's heads), you'll actually get the module telling you that it's actually a telnet server, not an RDP server. So that's really, really 
cool, and something we make a lot of use of, so that's definitely a pro tip. You get this great information back from Shodan, which you can then take to the next stage. As I mentioned, the CPEs are really, really cool. The Common Platform Enumeration gives you the vendor, the type of software that's in use, and the version number; here you even get the beta flag, which is pretty cool. Again, this is back to our main goal, right: our goal is vulnerable or misconfigured servers. How do you tell something is vulnerable remotely without actually attempting to shell it over the network, which is against our rules of engagement? Being able to get this fingerprinting information, which allows you a reasonable level of confidence about the vulnerability of a particular service, is extremely, extremely useful. These CPEs are now mapped to CVEs by Shodan itself. We used to have to do all of this manually ourselves, with a bunch of different tools, and then one day we looked and Shodan itself offered the same information that we'd been looking for, or been building ourselves, all that time. So that was cool: you can just use the CVE information, the Common Vulnerabilities and Exposures information from MITRE, and you get a really nice list of the vulnerabilities associated with a particular service. But we don't want to stop there, absolutely not. You get the CPE-to-CVE mapping from Shodan, but then you want to know: is this really exploitable? Because so many CVEs don't have a publicly available exploit, so the risk is pretty low. Again, you want to be able to convince people, your customers or your own organisation, that you've found something which is of actual value, something they should care about. We can store all of this information in the database: you get the CVEs, you get the CVSS scores, you get that bit 
of information about it. And we integrate with cve-search, which is a great little platform, a great little database, and it gives you basically a web interface to the Exploit-DB database, a local copy, so you don't have to break out to the internet all the time, which is pretty handy. The Orca will then take the CVEs you've discovered for all of these hosts, check whether there are any exploits available for them, and store that exploit information in the database. Then you can go back and say, well, we found these vulnerabilities, and of this subset of vulnerabilities, these ones have a publicly available exploit. You can even filter: I only care about RCE, I don't care about DoS, I only care about RCE. That gives you a pretty powerful platform to build on, and, as I say, a very good way of finding something which you can persuade somebody about. You can footprint somebody's entire infrastructure quite quickly and just have a look: these are the services which have publicly available exploits. And even if for some reason the exploits maybe don't work, or wouldn't work, because maybe there's a platform discrepancy or some little issue which is going to set it off, it's usually a good place to start looking. I mean, if you get something vulnerable to a CVE from 2008, there's probably something else interesting lurking behind the scenes there. So cve-search is really cool. Inside the Orca distribution you'll find the instructions for how to set it up. It does take about six hours to build the first time, so you have to be a little bit patient, unfortunately, but once you've got it, it's super, super useful; we get a lot of mileage out of it. A great little project, and really nice to integrate with the Orca to have it all in one place. So now, stepping back again to the goal: we want to find those vulnerable systems. The Orca has 
an explore feature, where it can give you a representation of what's in the database without having to do a whole bunch of SQL queries, or without exporting it to Excel first, because maybe you just want to look at it quickly on the command line. You get a bit of information about the host, information about the services; it'll tell you which CPEs it detected from Shodan, so here you get the vendor, the software type, OpenSSH in this case, and then the software version, which is pretty cool. You get the information from Shodan that gives you the banner information, so you can see what's happening in the banners; there's usually something pretty interesting in there. Banners are great: all kinds of information in there. So you can really get a quick overview. Again, this is a way of supercharging your recon, so that you get the information you care about back as quickly as possible, and you get what you need without wasting too much time going through unnecessary data, because, again, we're tightly focused: we only care about vulnerable and misconfigured systems. Once you've collected all of this information (you've done your subdomain enumeration, your Shodan enumeration, your DNS enumeration, you've looked up whether there are any exploits available), if you've got a very large organisation you're going after, it's difficult to make sense of it, to get those nuggets that you care about. So the Orca has a rules and tagging engine built into it. It's extensible; the rules are just written in JSON, you can add your own if you want to, and it tags the assets you discover according to whatever rules you put in. One example is remote access; that's something we look for a lot when we're doing recon. You've got an organisation that's put in a whole bunch of CIDRs, it's got a /16, a /8 in some cases, and you've got a whole bunch of 
You're thinking: okay, it's got lots of web servers, but I'm not too interested in those; I want something more directly useful, and remote access is a great place to start: VPNs, SSH, RDP, all that kind of stuff. The Orca can tag all of that up for you, so that when you get the output, you can just filter for that particular thing, for example remote access, and from the 50,000 hosts you've discovered, see just those 300 that are IPsec VPNs, SSL VPNs, RDP, and so on. We've found that to be very useful. The same goes for load balancers, for example: it'll pick up if there's an F5 there, or if there's a Windows operating system. As I say, there's a whole bunch of different rules in there for tagging up your results, to make it quicker to find the things you really care about. And it's extensible, so if there's something in particular you're going after, you can easily add it in, which is pretty handy.

Then there's exporting. All enterprise software competes with Microsoft Excel, as the saying goes, and recon tools are no different, it seems. As I said, after spending ages doing this kind of conversion manually, I decided to just use the XlsxWriter module for Python. It will spit out (this is a very, very small snapshot; I had trouble getting it all onto one slide) basically all of the information I've talked about so far into a spreadsheet. It'll have multiple tabs: one for DNS, one for vulnerabilities and exploits, one for an overview of all the host information, with all of the Shodan information. Then you can just take that and use standard spreadsheet-fu to make sense of it. As I said, one particularly handy thing is to filter for remote access, so I can immediately see all the remote access machines that this organisation has.
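The multi-tab export could be sketched with XlsxWriter roughly as follows. The sheet names and column layouts are assumptions for illustration; the Orca's real spreadsheet carries more detail.

```python
import xlsxwriter  # third-party module mentioned in the talk: pip install XlsxWriter

# Minimal sketch of a multi-tab Excel export: one worksheet per data type
# (DNS, vulns/exploits, hosts), a header row, then one row per record.

def export_report(path, dns_rows, vuln_rows, host_rows):
    workbook = xlsxwriter.Workbook(path)
    for sheet_name, header, rows in [
        ("DNS",   ["hostname", "record", "value"], dns_rows),
        ("Vulns", ["host", "cve", "exploit"],      vuln_rows),
        ("Hosts", ["ip", "port", "banner"],        host_rows),
    ]:
        sheet = workbook.add_worksheet(sheet_name)
        for col, title in enumerate(header):
            sheet.write(0, col, title)            # header row
        for r, row in enumerate(rows, start=1):
            for c, cell in enumerate(row):
                sheet.write(r, c, cell)
    workbook.close()

export_report(
    "orca_report.xlsx",
    dns_rows=[("vpn.example.com", "A", "203.0.113.10")],
    vuln_rows=[("203.0.113.10", "CVE-2016-6515", "yes")],
    host_rows=[("203.0.113.10", 22, "SSH-2.0-OpenSSH_5.1")],
)
```

From there the "spreadsheet-fu" is ordinary Excel filtering, for instance an autofilter on a tag column to pull out just the remote-access hosts.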
I can then take that and map it against, for example, breach data, and see whether there are creds for those systems floating around, which again is a pretty high-impact finding that people would probably be very interested in. So that's pretty handy.

In conclusion, the Orca is a tool for supercharging your OSINT, for giving you quick access to the things you care about. It's focused and tightly scoped: it doesn't look at any unnecessary data sources, lean and mean, just the ones you need to get your job done, looking at how you find vulnerable and misconfigured systems. It starts by taking in organisational names, domain names, and CIDR prefixes, and from there you can discover subdomains, services, vulns, and exploits. It tags up those results for you and exports them to Excel. It also has the explore feature, so you can look at everything from the command line, which is pretty easy, and if you're really desperate, you can go straight into the database and use some Postgres-fu to get out the stuff you care about. It's been released into the wild as of this morning; you can get it on GitHub, so please take it for a spin. There's an example walkthrough which goes step by step through all the different features of the Orca and how you can use them to find the sorts of things you might be interested in, whether it's for your bug bounty, for your own organisation, or for the lulz. That's all from me. I'll be happy to take any questions. Thanks a lot.