Hello everyone, this is Wes Lambert, and today we're talking about creating a Velociraptor collector. If you haven't heard of Velociraptor, it's an excellent endpoint visibility tool used for host monitoring, host forensics, and threat hunting; it can be used in lots of different ways and is just a great tool overall. We're going to talk about how to create a dedicated offline collector so that we can go over to endpoints and collect the forensic artifacts we're looking for. So without further ado, we'll go ahead and get started.

Just a brief overview: we'll talk about why we might want to create an offline collector, and then we'll walk through the steps of creating that collector and getting the data we're after. Typically we'll create an offline collector in a couple of situations. We might have isolated networks that we don't have access to from the Velociraptor server we maintain on a dedicated host, so agents wouldn't be able to communicate with that server. Or maybe we can't install local services on those boxes, or we simply don't want to. Or maybe we have customer environments where we don't have Velociraptor stood up and don't have that client-server architecture. In those cases we create an offline collector, hand it to our incident responders, and have them run it on the machines in those isolated networks or customer environments. That's usually the main reason we'd want this type of collector.

As far as actually creating the collector, there are a few steps to go through, and a lot of this is documented very thoroughly on the Velociraptor docs site.
On the docs site, it's under Triage and Acquisition; scroll down to Offline Collections. There's a great section on how to do this, and I'm going to walk through some of it right now.

We have a Velociraptor server stood up here with basically one client joined to it, named VRPRD, and what we're going to do is create that offline collector. To do that, we click the stack icon over here, then what looks like a paper airplane. That takes us to a wizard that lets us choose the artifacts we want to collect, specify what we're looking for, and package it all up into a single binary to ship off to our hosts. The box I'm looking to target is a Linux host, so I'm going to choose a few Linux artifacts: the APT sources and packages, some netstat information, some bash history, and a process list as well. With those selected, I can configure the parameters for each individual artifact if I'd like to.

From there, the collection-specific configuration, which covers the overarching collection of these artifacts, is right here. This is where we specify our target OS and some other details to guide the collector where we want it to go. Since I'm targeting a Linux endpoint, I'll choose the Linux option. I'm not going to specify a password here, but we could if we wanted to. There are various report templates you can create, or you can use the default report template. For now we're not going to generate a report; we'll just keep the default.
There are also a few options for the output format, or rather the destination, of our collection. We can have it be a zip archive dumped locally on disk on the endpoint. We can ship it up to a Google Cloud bucket or an AWS S3 bucket, which is really useful, especially if you're in that customer environment and want the data sent back to a central location where you can pick it up and process it after the fact. SFTP is another option. For the time being, we'll choose the default zip archive option. If we wanted to change the base binary the collector is built from, we could do that here with this Velociraptor Linux configuration; we're not going to right now.

There are some other details we could fill out, including the output format of the results. Maybe we want the results in JSON, or in both CSV and JSON for post-processing later; maybe we have another tool we want to import the data into that only accepts CSV. That's how we specify the format so we can post-process the data with additional tools. Another helpful setting is the output prefix, which, as the name suggests, prefixes the name of the collection file with whatever we specify. I'm going to use BTB2022 to add that prefix to the collection. I can specify additional resource limits on the next tab, but I won't this time; the defaults will work well for what we're doing. Then we can review the details of the collection we're creating, and if we scroll down, we can see the actual VQL, the Velociraptor Query Language, that will run.
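To summarize the choices the wizard has captured so far, the collector spec ends up looking roughly like the following. This is an illustrative sketch only, not the exact schema (the wizard's review screen shows the real generated VQL), and the bash-history artifact name here is a guess:

```yaml
# Illustrative sketch of the offline collector settings chosen in the demo.
# Field names approximate the GUI options; check the review screen for the real spec.
artifacts:
  - Linux.Debian.AptSources
  - Linux.Debian.Packages
  - Linux.Network.Netstat
  - Linux.Sys.BashHistory   # artifact name is a guess; use whichever bash-history artifact you selected
  - Linux.Sys.Pslist
target_os: Linux
target: ZIP                 # or GCS / S3 / SFTP to upload results to a central location
format: jsonl               # or csv (or both) for post-processing with other tools
output_prefix: BTB2022      # prepended to the collection zip's filename
```

The S3/GCS targets are what you would pick in a customer environment so responders don't have to hand-carry the zip back.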
If we go over here and click Launch, it takes a minute: it checks what kind of collector it needs to build and goes through those steps. After that, the status changes for the server's Create Collector artifact, and we see that it's completed. In the results we get some summary details, hash information for the binary, and the virtual file store path. If we scroll up, we can see the log of what happened during the collector build and whether there were any issues. Then we can go to Uploaded Files; this is where we actually download the collector binary so we can put it on the endpoint. So we click to download and wait a moment for it to finish, and once it's downloaded, we'll SCP it up to our target endpoint, which is a very simple way to copy it there.

I'll go to the terminal, expand it a bit, open a new tab, and SCP the collector from my downloads. Which one was it? It should be this one right here, the Collector velociraptor v0.6.5 Linux binary, and we'll copy it up to our endpoint. Obviously, if you're in an enterprise or some other type of environment, you might have a better way to do this, whether through PowerShell or some other deployment mechanism; that's certainly fine too. I'm just going to copy it up to the home directory. This is the server we're actually looking to investigate, the host we want to kick off this collector binary on, and its hostname is secretsauce.
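Since the server reports a SHA-256 for the built collector, it's worth verifying the downloaded copy against it before shipping the binary around. A minimal sketch of that check, using a stand-in file so the snippet is self-contained (the filename mirrors the demo's binary and is hypothetical):

```shell
# Stand-in for the real downloaded collector so this snippet runs anywhere.
printf 'collector bytes' > Collector_velociraptor-v0.6.5-linux-amd64

# In practice, paste the hash shown in the GUI results; here we compute it once
# as a placeholder for the server-reported value.
reported="$(sha256sum Collector_velociraptor-v0.6.5-linux-amd64 | awk '{print $1}')"
actual="$(sha256sum Collector_velociraptor-v0.6.5-linux-amd64 | awk '{print $1}')"

if [ "$reported" = "$actual" ]; then
  # Safe to ship, e.g.: scp Collector_velociraptor-v0.6.5-linux-amd64 user@target:~/
  echo "hash OK"
else
  echo "hash mismatch - re-download the collector" >&2
fi
```

The same check repeated on the target host confirms nothing was corrupted in transit.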
Maybe there are some interesting things here that an attacker has gone looking for, and we're trying to find evidence of that. We'll check to see if our collector binary is there, and we do see it right here: Collector velociraptor v0.6.5 Linux amd64. All we really have to do is make it executable and then run it. We need elevated permissions for this, so we're using sudo. We hit Enter and it just goes off and does its thing.

As you can see, this was super fast. Up here we can see which artifacts it went through; these are not very heavy and won't take long, but if you're using more intensive artifacts, you'll see the collection take a while longer to finish. At the end you see the final report output. If we do an ls, we'll see the zip file it created, prepended with our BTB2022 prefix and then the hostname. That's our resulting collection zip. If we were to unzip it, we'd see all the contents, in this case the JSON-formatted artifacts we collected.

But we want to do something else here. Rather than just reviewing these results one by one, or post-processing them with other tools, we can take them and, in a sense, sneakernet them back into the server, even though we don't have that traditional client-server relationship where the client reports back. So we'll SCP this over to the Velociraptor server that we have here. I'll move this over so we can see it.
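The on-host steps amount to "make it executable, run it as root." Since the real binary isn't included here, a stand-in script demonstrates the same chmod-and-run pattern (the script name and output are made up for illustration):

```shell
# Stand-in for the real Collector_velociraptor-... binary copied up via scp,
# so this snippet is runnable anywhere.
printf '#!/bin/sh\necho "collection complete"\n' > collector-demo.sh

chmod +x collector-demo.sh   # same step needed for the real collector binary
./collector-demo.sh          # the real run uses sudo for full forensic access
```

With the real binary, the run ends by writing the `<prefix><hostname>...zip` collection archive into the working directory.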
If I remember the IP address, we'll put that in the home directory there. Whoops. Okay, we need sudo on that. All right, cool; we just needed those admin privileges to copy it over from the machine. Again, if you have a different way that you post-process these results, you might already have them shipping to AWS or GCP, so this manual step may not apply to you.

Now we'll go over to the VRPRD server, which is our Velociraptor server, and do a quick listing, and we see our BTB2022 collection here. One thing we should do before importing it into the server and reviewing the results is change the ownership on this file, just to make sure the Velociraptor server has permission to import it into the datastore. We've done that here, and we see velociraptor:velociraptor, so we're good there.

So far, we've created the collector binary and run the collection super fast with the Linux artifacts we specified, and now we're going to import it through the GUI. I'll minimize this, and from the server artifacts screen we can search for "import". Here it is: Import Collection, this is the one we want. As it says, it basically takes a zip archive and imports it into the server. We can configure it here and leave the client ID set to auto, which lets the server generate a random client ID. But one thing we can do to help differentiate this collection, or tag it with a particular name, is set a hostname, because these are disparate collections; the server has not previously known about this particular box or these results.
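The server-side ownership fix is a one-liner. Here is the pattern with a stand-in file; the chown itself is commented out because it needs root and the velociraptor service account only exists on a real server (filename is hypothetical):

```shell
# Stand-in for the collection zip copied up to the server.
touch BTB2022-secretsauce.zip

# On the real Velociraptor server (needs root; 'velociraptor' is the service account):
#   sudo chown velociraptor:velociraptor BTB2022-secretsauce.zip

# Confirm owner:group before importing, expecting velociraptor:velociraptor:
stat -c '%U:%G %n' BTB2022-secretsauce.zip
```

If the server process can't read the file, the import artifact will fail, so this quick check saves a retry.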
So I'm going to call this secretsauce, for the hostname of the box we ran the collection on, and then specify the path on disk to that collection; let's go back and grab that file name real quick to make copying it a little easier. Now, one reason there isn't an upload straight through the UI is that these collections can be very large. On Windows, for example, if you're running the KapeFiles targets artifact, you might be collecting a ton of different things from that host, and it can be gigabytes and gigabytes. So it can be really big, and we don't want to force that through the UI; we just let the server pick it up from the file system like this. If we wanted to change any of the other options, we could do that here; I'm going to leave the defaults and launch it.

It takes just a second, and we see it finished. In the log we can see it created a new client and imported the zip file, and we can see the different JSON files that were in the collection zip, which we could have reviewed manually, but I wanted to import them into the server to make things easier. And now here's the cool part: if we go over to the clients view, we see this client isn't actually connected, right? It's not in that client-server architecture. But it does register as a client, and we can review the results just as if it were connected to the server. If we go to the Collected screen, we'll see all of the artifacts we selected listed as collected under this imported client, and we can pull up and review the results for each one of those artifacts. If there had been any files uploaded, we could review those the same way.
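For reference, the values filled into the import artifact in the demo come down to three parameters. The names below approximate what the GUI shows for the Import Collection server artifact (check your server's artifact page for the exact parameter names), and the path is a hypothetical example:

```yaml
# Parameters for the Import Collection server artifact, as set in the demo.
ClientId: auto                               # let the server mint a random client ID
Hostname: secretsauce                        # tag so the imported client is recognizable
Path: /home/user/BTB2022-secretsauce.zip     # hypothetical on-disk path to the copied zip
```

Setting `Hostname` is what makes these otherwise disparate offline collections easy to tell apart in the clients view afterwards.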
We can review all of this as if the host had been connected to the server all along, which is what makes this really awesome. It's a really great way to take that data, import it back into the server, and have that same post-processing capability; we can even post-process it in a notebook like we do with our other data, and just get at what we're looking for. That helps us that much more as analysts or incident responders, being able to do this quickly and efficiently.

All right, I'm going to skip through these remaining slides. As far as the demo and showing you how to create this collector binary, that's really it; it's very simple and very quick. If you wanted to configure it for AWS, all you would have to do is enter those S3 bucket details and it would be good to go and uploading there. If you have any questions or feedback, please feel free to follow Velocidex, the handle for the folks behind Velociraptor. If you have any questions for me, let me know at therealwlambert, or just shout out to Blue Team Village and let us know if you enjoyed this. And if you want to dig in more and really go through the documentation, there's a link here to that Offline Collections page I referred to; aside from this video, it's really the best place to go, and it lays out the steps you'll need to keep going with this. So again, please feel free to reach out if you have any feedback or questions. Until then, happy hunting and have a great day.