Welcome back, everyone. Today we're going to talk about Velociraptor, an endpoint visibility tool. It lets you set up a server and a fleet of clients on your network, then monitor and respond to incidents on each of those clients through a centralized server. Imagine a network with 500 clients: you want to monitor and respond to network events, or maybe network attacks. Velociraptor gives you a centralized server that can very quickly scan across all of those clients. So today we're going to get started with setting up Velociraptor, both the server and the client, and then start to practice with Velociraptor's special features. This setup is not something you would want to roll out in production; it's specifically so you can start playing with the features, learn the Velociraptor query language, and start thinking about Velociraptor before you actually implement it in your network in a more secure way. Velociraptor's blog is at velocidex.com, and the Velocidex/Velociraptor GitHub repository is where all the current development happens. It's actively maintained, and we're most interested in the releases section. Go ahead and click on that; under the releases I'm going to use 0.6.5-3, so grab whatever the most recent version is at the time you're downloading. I'm running both the server and the client on Linux, so I need the 0.6.5-3 (or latest) linux-amd64 binary. If you have a Windows system, you probably want windows-amd64 with the newest version; if you're on macOS, you probably want the darwin build. Whichever binary you download, make sure you also get the file with the same name that ends in .sig. That's the signature file.
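As a sketch of what that download step looks like from a terminal: the release tag and asset names below are examples only; check the actual filenames on the GitHub releases page, since the naming can change between versions.

```shell
# Hypothetical release tag and asset names -- verify against the releases page.
ver="v0.6.5-3"
base="https://github.com/Velocidex/velociraptor/releases/download/${ver}"
bin="velociraptor-${ver}-linux-amd64"
echo "${base}/${bin}"        # the binary
echo "${base}/${bin}.sig"    # the matching signature -- always grab both
# then fetch both, e.g.: curl -LO "${base}/${bin}" && curl -LO "${base}/${bin}.sig"
```

The point is simply that the `.sig` asset shares the binary's exact name plus the `.sig` suffix, so pairing them up is mechanical.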
So I already have the pre-compiled Velociraptor binary downloaded along with its corresponding signature file. I also have a virtual machine running as my client, which I call the suspect workstation, with the same binary downloaded onto it. The interesting thing about Velociraptor is that one binary can be either the client or the server; it just depends on how you run it. We're on the server right now, so I'll open up bash. If I run ls, I can see the binary and the signature file I downloaded. The first thing to do after downloading Velociraptor is verify that it's the correct binary. We do that with gpg --verify against the .sig file: gpg --verify velociraptor.sig. Press Enter, and gpg says it's assuming the signed data is in the binary file; it found that the signature was made using an RSA key, but it can't check the signature because no public key was found. That's because I haven't imported this key yet. So copy that key ID, run gpg --search-keys and paste the key ID directly in, and it should turn up. The data source is keys.openpgp.org, which is what I have configured, and the result is the Velociraptor team's key. That key looks correct; you'd want to go confirm it independently, and I've already confirmed this is the right one. Enter the number 1, and now I've imported the Velociraptor team's key. Now re-run the previous command, gpg --verify velociraptor.sig, hit Enter, and again it's assuming the signed data is in the binary file.
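If you want to rehearse that verify workflow before doing it for real, you can walk the same commands with a throwaway key in a scratch keyring. This is purely practice under assumed conditions: it assumes gpg 2.1+ is installed, and nothing here touches the real Velociraptor key or binary; the filenames are stand-ins.

```shell
# Practice run of detach-sign + verify with a disposable key and keyring.
export GNUPGHOME="$(mktemp -d)"                      # scratch keyring, auto-private perms
echo "pretend binary" > velociraptor                 # stand-in for the real download
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key practice@example.invalid default default never
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output velociraptor.sig --detach-sign velociraptor
gpg --verify velociraptor.sig velociraptor           # should report a good signature
```

The real workflow is identical from `gpg --verify` onward; the only difference is that the signing key arrives via `gpg --search-keys` instead of being generated locally.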
So we're actually checking the binary file using this key, and we get a good signature. The key is not certified or trusted, because I'm not 100% sure it's from them, but I do know that whoever owns this key is also the one that signed the binary. That basically means the binary is correct if I trust that key. Now why is that important? Knowing that this binary is the real binary the developers want you to have matters because you're going to push it out to basically all of the important clients on your network, and you don't want to push out a binary that might be malicious or might have a Trojan attached. Verify your binaries every single time you download. The next thing: on Linux I don't have permission to run this yet. You can tell by the coloring, and if we run ls -lha I can see that I have read and write permissions but not execute permission on the binary. To run it on my system I need to change the permissions: sudo chmod +x velociraptor (plus x basically means add execute). Hit Enter, give the password, and now ls shows it in green, so you can see it's executable. Now we can run it with ./velociraptor (I used tab completion so I don't have to type all of that out), and to check that it works I add the -h switch at the end: the binary plus -h. If the help menu runs, the binary at least runs, and we see that it can run.
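The execute-bit dance above is easy to rehearse on any scratch file; this sketch uses a temp file as a stand-in for the downloaded binary to show exactly what chmod +x changes.

```shell
# A fresh download typically lands with rw- permissions and no execute bit.
f="$(mktemp)"                      # stand-in for the downloaded binary
chmod 644 "$f"                     # typical post-download mode: -rw-r--r--
test -x "$f" || echo "not executable yet"
chmod +x "$f"                      # same idea as: sudo chmod +x velociraptor
test -x "$f" && echo "executable"
ls -lha "$f"                       # the mode string now includes x bits
```

The green filename in ls output is just the shell's way of flagging that same x bit.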
You'll see that there are a lot of commands in Velociraptor, and this is what took me a while to get my head around: the one binary can do many different tasks. It can be your server, it can be your client, you can run it straight from the command line to do a one-off query, or you can run it to bring up a GUI, a web interface, which is what we're doing today. You can do essentially everything from the command line or everything from the user interface. I think it's easier to get started with the user interface first, understand how that works, and then the command-line version makes a lot more sense. The most important thing we need to run first is config: the first time we start, we have to generate a new configuration, which sets up the configuration files for both the server and the client so we don't have to create them manually. So I run ./velociraptor config generate -i: the binary, the config generate command, and -i to make it interactive. That brings up the menu: "Welcome to the Velociraptor configuration generator. I will be creating a new deployment configuration for you," which is very handy. What OS will the server be deployed on? Our server's operating system is Linux; if you're using Windows or Darwin, just hit up or down to select the operating system. Next is the path to the datastore directory. The datastore is a really interesting feature of Velociraptor: instead of using a database, it keeps everything in directories, with small databases scattered around, but basically a flat file structure, so you don't actually have to
create a special database to use Velociraptor and hold its datastore. If you have a lot of systems on your network, you're going to collect a lot of data whenever you run hunts or pull artifacts from those systems, so make sure you're saving this data somewhere with plenty of storage. I'm only dealing with one client right now, so I'm pretty sure my computer can handle it, and I'll put it in /opt; but in a real deployment you'll either be on a cloud service with cloud storage, where space probably won't be an issue, or you'll want to make sure whatever directory holds your datastore has enough room for all the artifacts you're going to collect. I'm saving it to /opt/velociraptor because I'm not expecting much data with only one client. With the datastore set to /opt/velociraptor, I'll copy that path and create it manually in another terminal: sudo mkdir, then the directory exactly as entered. Now I've created /opt/velociraptor with root permissions, so I'll set my user as the owner so I don't have access problems later: sudo chown -R (which means recursive), then my user joshua, then /opt/velociraptor. Now the directory exists and my user account owns it, so I can actually access it. The next option is whether we want a self-signed SSL certificate, automatically provisioned certificates with Let's Encrypt, or user authentication with SSO. SSO can be something like Google or GitHub single sign-on, but you have to be deploying to a publicly accessible system to receive callbacks from your SSO provider. Let's Encrypt is similar: it only issues certificates if you have a public domain
name, so you have to register a public domain name, your IP has to be public, and Let's Encrypt has to connect to your server to actually issue the certificates. Those options are for real deployments onto your network or into the cloud. We're using the self-signed SSL option because we just want to get this running so we can practice with it, so hit Enter for self-signed SSL. Next: what is the public DNS name of the master frontend? I'm keeping localhost, which is the default (it's cut off a bit on screen, but you can see localhost there). Frontend port to listen on: by default the server listens on 8000, and that's the port all of the clients will talk to your server on, so we'll keep the default. Port for the GUI to listen on: 8889 by default, so hit Enter. Are you using Google Domains DynDNS? No. GUI username or email address to authorize: I'm going to use just admin. Again, if this is publicly accessible in any way, do not use something weak like admin with a weak password, because this admin interface lets anyone run hunts or even commands on all of your clients; someone could take over your entire network and run arbitrary commands from it. Enter the password, and it asks again for another GUI username or email address to authorize; I just hit Enter because I only want admin. Then we get the Velociraptor logo, it generates keys, and it asks for the path to the logs directory, which defaults to /opt/velociraptor/logs. That's fine for our implementation here, though in production I'd point the logs at some sort of log parser. Hit Enter, then I'll switch back to my other terminal and create that directory too, mkdir /opt/velociraptor/logs, just to make sure that
those directories are created and I don't have issues later. Now inside /opt we have one directory called velociraptor, and under velociraptor another directory called logs; that's the skeleton Velociraptor needs to run. Next: where should it write the server config file? The default is server.config.yaml; go ahead and hit Enter to keep it. It also creates the client config file, client.config.yaml; hit Enter again. Now that directory has two new files, generated from everything we just selected, and this is by far the easiest way to get started: just generate a new config. In client.config.yaml we have some information about the Velociraptor version and the Go version used to compile it, and we have server_urls, which is set to localhost on port 8000. That won't work as-is, because our client is on a different IP address, so instead of localhost I'm going to enter the server's IP: my suspect workstation virtual machine will be the client, and it will connect back to my Velociraptor server at 172.16.0.37. We also have certificates in here; this is everything the client uses to connect back to the server and validate it. We want to protect both client.config.yaml and server.config.yaml, because they contain certificate and connection information. The next step is to copy the client config to the client. There are lots of ways to push this out; in a production environment you would package up the client and install it on all your endpoints with its config already included. Here I have my suspect workstation, and since it's a virtual machine I can just drag and drop the file over, and then I want the
client config in the same directory as the binary, just to make it easy to execute. So now we have our server config and our verified Velociraptor binary on the server, and the client config and the same binary on the client; that's all we need to get started. Next I'm going to actually start the server. I can start it in lots of different ways, but I'll start it with the GUI, which is probably the easiest way I've found to begin: ./velociraptor --config server.config.yaml gui, that is, the binary, --config to point it at the config I want to use (server.config.yaml), and then the gui command. Hit Enter and it loads the config from server.config.yaml, which looks good, checks the datastore, starts local services, and loads all the artifact plugins; that's just the local server starting up. Notice it also opened a window in Firefox that says "security risk ahead." That's because I have a self-signed SSL certificate; this warning could indicate an attack, but I know I just created that certificate, so I'll go to Advanced, then Accept the Risk and Continue. Yes, I want to go to the website. Once everything is running you should see the Velociraptor logo, and if we go to Home we have information about our server: server status, CPU and memory utilization, and currently connected clients. With no clients connected there isn't much to see, so let's go over to our client workstation. Here, ls -lha shows the Velociraptor binary without execute permission, so do the same thing, sudo chmod +x velociraptor, and it turns green so we can run it. I'll run the equivalent command to start the client: ./velociraptor, and then for the configuration I want
to pass --config client.config.yaml, and then instead of gui I run client, plus -v. So: the Velociraptor binary, the client config that connects back to our server, and the client command, which reports back to the server and polls for jobs if there's anything to run. Running as a client, it basically works in the background; it won't pop up a GUI on the client or anything like that. The -v means verbose output, so we can actually see what's going on; if you don't use -v, the client prints nothing to the screen and you only see reports back at the server, which is fine in production, but when testing we really want to see what's happening on the client, so I keep -v for testing purposes. The first thing that happened: it loaded the client config file, but then failed loading the writeback file from /etc/velociraptor.writeback.yaml, because we're not running as an elevated user or with sudo. So clear the screen and run the command again with sudo, which gives the Velociraptor binary root privileges so it can run all sorts of different commands on the system. This run looks a little different: it loads the client config, successfully gets the writeback file, starts the crypto service, starts the notification service, and starts query execution, so it did run a couple of queries. And here's the interesting thing: your first client has to enroll itself. When it enrolls, if we go back to the interface, we can see where it enrolled: we now have one client and one connection in the interface, because it successfully enrolled itself. You would not see any of this output if you did not use the -v or the verbose command for
the client. All of this output is really useful for confirming that we're actually making a connection to our server, and that's pretty much all you have to do on the client side. We know it's working because the currently-connected-clients count went up; we expect at least one, since we do have a connected client. So how do we actually start interacting with this system? I found this GUI a little confusing at first, because it's designed for, say, 10,000 clients: you should be able to use this interface to filter through 10,000 machines. It's not set up like a normal digital forensics tool where you look at one system in depth; it's really designed for mass client scanning, and in my opinion it takes some getting used to. First: the client list is not on the left side like you might normally expect. The only client controls are the search bar at the top and a drop-down arrow where we can show all or recent hosts. If we choose Show All, we see our Velo client, the one we just started. Notice that selecting Show All types "all" into the search box, so you can either use the drop-down or just type all, and you'll get every client you have available. Since we can see our client, we have the client ID, its hostname, fully qualified domain name, and operating system version; it reports back as Linux Mint 21. We also have a labels column. If you click on the client, you get a detailed client page: first seen, last seen, last seen IP address, labels (currently empty), and the operating system and architecture. Not a lot of information, but you can add more client metadata if you know it, any custom key/value pairs you want. For example, and I don't know that I'd necessarily do this, but I could add something like user or the
first login, I don't know; you can add any custom metadata you want to this particular client. One of the most important things to do, though, probably more important than metadata, is to add labels. Labels drive a lot of the search filters, and once you have many clients on your network, labeling helps you organize them. For example, I might organize by front desk, sales, IT, executive, something like that, and then I can filter on those labels specifically instead of querying everything on my network. To make a label, click the label icon at the top (it looks like a tag); we can either select an existing label, though we have none created yet, or add a new one. This machine is Linux, so I'll give it a label of Linux and add it, and I'll also give it a label of, let's say, dev for development. So now we have two labels. What I can do now, instead of selecting all, is type label: and it autocompletes the labels I might want, dev or Linux; if I select Linux, I see all of the Linux systems I currently have as clients. Think of everything you do in Velociraptor at scale: keep everything very filterable, and keep the set of machines you're working on as small as possible. You can kick off a very large job against everything in your network and then crash your network, because you're requesting massive amounts of data from every single client. Think about how you can pare that down to the smallest form possible: maybe I only want to run certain things on Linux clients, or only for the dev team. All right, so now our client is labeled Linux and dev, it has some custom metadata, and we can do searches and filters based on all of that. Let's go through this interface a little; you notice it kind of went down to the host
information section automatically. Let's look at Interrogate first. Interrogate basically just re-queries the client for its basic information, in case the IP address updated or the version number changed, something like that. You can see nothing really updated here except "last seen at." Interrogate simply refreshes the information you see on this page; if you need to update it for some reason, just click that. Next is VFS, the virtual file system, which is interesting whenever you want to focus on a specific system. We're on a Linux system, so the NTFS and registry views aren't going to help us much; those are for Windows. Click "auto" and you'll see there's nothing there yet, but clicking the open-folder icon actually goes out and queries the client for its directory listing. It doesn't pull the entire directory tree, and we don't want to query everything because that would take a long time. Instead, query something specific, for example home: open the home directory, then either query again to get the next level down, or do a recursive query. Note the difference: "recursive download" actually downloads that whole directory to our server, which I wouldn't recommend unless you really know what you're doing and need that directory. A recursive query walks through home, and now we see client, and everything under client, because it was listed recursively. Now we can just scroll through here and look for something interesting, for example the known_hosts file for SSH. I've clicked on known_hosts, and you can see we have timestamps. If I click "collect from client," it downloads that file to my local server, the server I'm working on, and then I can look at the text view and the hex view. Using
the VFS feature, I can browse directory structures and pick specific files out of a remote client. Now remember, none of this is on the server to begin with: the server connects to the client, asks for its directory structure, reports that back, and when I found a file, it collected that file from the client and downloaded it, so now I have a copy on my system locally. If I'm trying to collect files from a system, I can bring them in locally and then analyze them with any digital forensics tool I want. If I click Collected, we can see we've been building a log of actions: listed directories, listed directory, download file; basically a log of all that activity. Under uploaded files we have the known_hosts file and where it's currently located inside our VFS path: clients, collections, uploads, auto, home, client, .ssh, known_hosts. So let's open up /opt/velociraptor and see what's in here: we have our server artifacts, our log files, our config, clients, the client index, and access control lists. This is mostly server data, except clients, so go into clients: we have our collections, and under a collection, uploads/auto/home/client/.ssh/known_hosts. So there's the known_hosts file we downloaded; that VFS path is also the full path to it on our server right now. Inside here, on our local system, which is our server, we have the real known_hosts file, because a download from the client really does transfer the file from the client to your server. Now imagine these were each one-gigabyte files and you were collecting them from 500 clients at the same time: you'd slow down your network and cause some problems. This is what I mean by thinking about scale; you're actually downloading those files from your clients.
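The arithmetic behind that warning is worth making explicit; a quick back-of-envelope sketch of the hypothetical 500-client collection:

```shell
# 500 clients each uploading one 1 GiB file: how much lands on the server?
clients=500
per_client_mib=1024                      # 1 GiB per client, expressed in MiB
total_mib=$((clients * per_client_mib))
echo "total: $((total_mib / 1024)) GiB"  # prints: total: 500 GiB
```

Half a terabyte arriving at once is both a disk-space problem on the server and a bandwidth problem on every link between it and the clients, which is why the later hunt options for upload limits matter.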
Consider the impact that's going to have on your remote network. Under Collected we can see everything that has been collected from the system; it's basically an activity log for that particular client. Here we also have the option to quarantine the host. Quarantine currently only ships configured for Windows systems, but basically it runs a script when you know something has been infected or infiltrated and you want to isolate it: click the button and it runs your quarantine procedure on that system, whatever that procedure is, which is up to you. I don't have one configured for Linux; it comes with a default for Windows but not for Linux. So quarantine just runs an automated script; think about what your quarantine procedure would be when you find an infected or hacked system. Going to Overview: that's basically just the first page we saw. We also have VQL Drilldown; VQL is the query language Velociraptor uses, kind of like SQL except a little easier, where you mix local commands into SQL-style queries. We'll talk more about that in a second, and I'll actually show it. And here's one of the most interesting things, since I'm logged in as an admin account: you can click Shell and run shell commands on the remote system. Bash is basically for Linux, PowerShell is for Windows, and VQL is the query language. If I run a bash command, say ifconfig, and launch it, it runs on the client, and if I click the eye icon I can see the results: it has two interfaces, and its IP address currently ends in .26, which is what I expect. This is why I say you definitely want to check your binaries and not run this publicly accessible: if anyone can intercept this traffic or get your admin credentials, they can execute remote commands on all of your clients at the same time. This is really dangerous, but it's also extremely useful for
incident response. From the home page I have, for example, the currently connected clients; I've been dealing with Veloclient the entire time, our virtual machine client. You'll notice that sometimes Veloclient looks like it's down; I find this is totally normal. The next option down is the hunt manager, the little crosshairs icon. Basically, this lets us run a query across all of our connected clients, and creating hunts is kind of the core of where everything happens. Press the plus sign, and for the hunt description, let's say I want to find that JPEG image on our client, so: find jpg. This will run across all of the clients I select. The include condition can be "run everywhere" or "match by label"; for example, I could match by label dev or Linux. Let's use the Linux label. This is where the power of labeling comes in: if I have a label like sales and there's a specific attack going after our sales department, I can run hunts against just the sales department, or just the finance team. We also have an exclude condition that can match by label, so maybe I don't want any of these hunts touching dev. Make sure you are labeling; it will really help you narrow down your hunts. Or you can of course run everywhere, or select by operating system: the included operating system is Linux in this case, and it really could be all, but I'm going to specify Linux. The estimated affected clients is one; again, this is basically here to show you how much impact you're going to have on the network. If you're planning to download files from each of these clients, one client probably won't hurt the network much, but 10,000 clients each uploading a very big file means a lot of space on your server and a lot of bandwidth. With one client we'll just go ahead. There's also the expiration time: hunts do not finish,
they keep running until they expire. The idea is that on a very large network, maybe even across time zones, people are putting clients on and off the network over time. You set the hunt to expire at a specific time, and until then it will try each client that connects, check whether the query has already run for that client, and if not, run it against that client. What this lets you do is set a hunt to run for, say, a week: what if somebody has been on vacation for an entire week and comes back the next? Without this, their computer never got queried. You get every client that appears during the active window, then it stops, and you can always reset it and run it again later if you want. Next, select artifacts. We haven't really talked about artifacts yet, but they're the core of Velociraptor: scripts that perform some particular action related to your investigation, data collection, or information gathering. These scripts are included with Velociraptor, but there's also a whole artifact exchange where people upload custom scripts they have written, for example "quarantine a Linux host using iptables rules," an actual Linux remediation quarantine you could download and install directly into your Velociraptor server. The artifact exchange is excellent and very active. We have the default artifacts available here, and the one I want to look at is Linux.Search.FileFinder. The naming convention is the operating system, then the program or area it relates to, then the capability; so under Linux we have Linux.Ssh.AuthorizedKeys, which collects authorized_keys files from clients, and Linux.Search.FileFinder, which is just a file finder for Linux. Click on it: we have a performance note, some instructions about when you'd use it, and the different parameters showing how we can actually configure it. I find this to be kind of
useful, but I don't find the description extremely useful; you kind of have to play around to figure it out. And then we actually have the code that does the thing; sometimes I find the code a lot more enlightening than the description itself. In a hunt you can select multiple artifacts to collect or analyze at the same time: for example, if I also wanted Linux SSH private keys, I could just select that too, both turn blue, and I can configure each of them for how I want to run them during the hunt. To deselect one, click it again and it turns white; whichever ones are blue are the ones you've actually selected. Next we go to configure parameters. You can't just run the file finder as-is; it doesn't work without configuration, so click it and you get the options for how you want to search, or whatever the options are for that artifact. I'm going to remove the search-files glob table, which basically lets you search several patterns at the same time by adding rows with the plus sign and putting your search patterns in; I'll close that because I only want to search for one thing, and that is /home/**/*.jpg. Now what the heck do all those stars mean? We're using glob syntax, something kind of like a regular expression: * means match anything, and *.jpg matches anything that ends with .jpg, so it basically returns any file with a jpg extension. What about the other stars? /home is the home directory you normally see in Linux environments, and ** means search this area recursively. I don't know my client's home username; in my client I'm looking in the home directory without necessarily knowing the name of that user's folder, so I need ** to look in every single folder in this directory and then
Well, what's in the subdirectory? Desktop is there, and inside Desktop is our evidence.jpg. I don't know the name of this file's parent folder, because I don't know the client's username; I only know the extension. So that search looks in the client's home directory, recursively, for any name that ends in .jpg. That's a little bit about the glob patterns here. The single search-files glob searches for one pattern; if you're searching for several, use the glob table, hit the plus sign, and add your searches one row at a time. You can also set up a YARA rule here if you're more comfortable with that.

Then we have the Upload File option: once I find this image, do I upload it to the server? We're going to select that, because yes, I want the image on the server, just to make sure I'm finding whatever I'm supposed to find. And I come from the forensics side, so I always want to calculate hashes. Then there are some other filters, 'more recent than' and 'modified before', which just use the filesystem's modification timestamps. Exclude Paths is set to proc, sys, run, and snap; in a Linux environment you normally don't want to search through /proc, /sys, /run, or /snap. 'Local filesystem only' means we don't search external mounts, so I would keep that on unless you think the file is on an external mount on your client. And 'one filesystem': if your client has, say, an ext4 main partition and NTFS on an external partition, do we actually want to look in the other place? If you only want one filesystem, select it; if you don't select it, the search will just cross into everything.
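The glob table described above is essentially a union over several patterns. Sketched in Python with `fnmatch` (the patterns and candidate paths are just examples, not anything from the hunt):

```python
import fnmatch

# Hypothetical rows of a glob table: one pattern per row.
patterns = ["*.jpg", "*.png", ".ssh/id_*"]

candidates = ["Desktop/evidence.jpg", "Pictures/cat.png",
              ".ssh/id_rsa", "notes.txt"]

# A file is a hit if it matches ANY row, mirroring how the table
# collects results for every pattern in a single pass.
hits = [c for c in candidates
        if any(fnmatch.fnmatch(c, p) for p in patterns)]
print(hits)
```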
And 'do not follow symlinks' is specific to symbolic links in Linux. So here we are: searching for anything that ends in .jpg inside the home directory, not using YARA rules, uploading any files we find to the server, calculating the hash of any files we find, no other filters, but staying on the local filesystem rather than some mapped device. Now I can close that. If I had more artifacts they would be listed here too; just click each one to expand it, set your options, and minimize it again, working through each selection.

Next, Specify Resources. These are the client-side resources the hunt is allowed to use. If you're running the hunt during the day, when your client is online and active, and it's a user's workstation rather than an always-on server, you might want to set the CPU limit to something low like 10 percent instead of 100 so you don't affect the user too much. The same goes for IO, and also for max megabytes uploaded: you can build in a safety catch so you don't accidentally pull huge files from clients and overload your network. Maybe one gigabyte is the biggest you want to download, if you have the storage for that across all of your clients.

Next, Review. This shows the configuration for the hunt you're about to run; mine is very simple because I'm only using one artifact to find jpeg files. Then Launch, and note that the hunt launches in a paused state, so 'Launch' doesn't actually start it; you need to click it again. You can see a summary of everything here, requests and clients, though nothing has happened yet because we haven't run it. Whenever you're satisfied with the hunt over all of the systems you've selected or filtered for (in my case, any Linux systems), we can hit the play button.
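The hash-calculation step above is plain digest work, and the upload cap is just a size check before collection. Here is roughly what those two steps amount to, sketched with Python's hashlib; the file contents and the one-gigabyte figure are arbitrary stand-ins:

```python
import hashlib
import os
import tempfile

MAX_UPLOAD = 1 * 1024 ** 3  # e.g. a 1 GB cap, like the hunt's upload limit

def hash_file(path):
    """Stream a file once, computing MD5, SHA1, and SHA256 together."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream, don't slurp
            for h in (md5, sha1, sha256):
                h.update(chunk)
    return md5.hexdigest(), sha1.hexdigest(), sha256.hexdigest()

# Demo on a small temp file standing in for evidence.jpg.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"fake jpeg bytes")
tmp.close()

if os.path.getsize(tmp.name) <= MAX_UPLOAD:  # enforce the upload cap first
    digests = hash_file(tmp.name)
    print(digests)
```

Streaming in chunks matters for the same reason the resource limits do: you don't want the collection step itself to hammer the client.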
'Are you sure you want to run this hunt?' It asks you so many times because it's expecting you to query hundreds of clients. Run it, and we can see Total Scheduled: one, and, very quickly, Finished Clients: one as well. Next we can go to Clients and see the clients that have connected and their state: whether they've finished, total bytes sent, total rows returned, and the flow ID for each client (I'll come back to that in a second). We also have the notebook, and under the notebook is everything returned for the hunt. We did find something at /home/client/Desktop/evidence.jpg, the file we wanted, along with information about it: the SHA256, MD5, and SHA1 calculated from it, and the flow ID. We also found another jpeg inside what looks like a Mozilla directory, but the jpeg we're actually interested in is here. If I search for 'jpeg', I can see that under file/home/client/Desktop we have our evidence.jpg, which was automatically downloaded by the hunt. So now I can work with this file directly on the server: I could feed it to an analysis tool, maybe a sandbox, or whatever other tools I want to use.

That's how we run hunts over all of our clients, and it isn't just for identifying files or logs that exist on a system; you can analyze the contents of files as well. We could go into a log file, look for specific things, and if we find them, take some other action. If we had a Windows system, we could go into the Windows registry, and if we find a particular key, collect it and do something with it afterwards. So it's a really great, interesting tool for both detection and response.
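That 'look inside a file, then act on what you find' idea is easy to picture outside Velociraptor too. A tiny Python sketch of content-triggered detection; the log file and the pattern are invented for illustration:

```python
import re
import tempfile

# A stand-in log file with one suspicious entry.
log = tempfile.NamedTemporaryFile(mode="w", suffix=".log", delete=False)
log.write("ok: service started\n")
log.write("error: unauthorized access attempt from 10.0.0.9\n")
log.close()

pattern = re.compile(r"unauthorized access attempt from (\S+)")

matches = []
with open(log.name) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            matches.append(m.group(1))  # the offending source address

# In a real hunt, this is where a follow-up action would fire:
# collect the file, raise an alert, quarantine the host, and so on.
print(matches)
```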
We can run detection scripts while we're hunting, and we can also respond by changing the configuration of any of those clients as we hunt. So that's a little bit about hunting with different artifacts.

Next let's look at View Artifacts. You can see all of the installed artifacts on the right-hand side; I only have the defaults installed, but you can download a lot more from the Artifact Exchange. Let's look at one of them, the Linux bash history artifact, which collects bash history from Linux systems. Click on it and you can see the documentation, the required parameters, and the source code for that artifact. If you click the pencil icon, you get an editable version, so you can change the artifact definition to suit your systems; we can edit these artifacts and we can also create our own. You can also create a hunt from an artifact directly: click the target symbol and it sets up a hunt based on that particular artifact.

Next let's look at Server Events. By default there are a few server event monitors; these are things logged on the server side. For example, we have artifact modification at the top, and a timeline of events that we can scroll through. Probably the most interesting one here is system hunt creation: we created one hunt, called 'find jpeg', we ran it, and you can see the details about it. When you have a lot of different users on the system, you can use this to make sure they aren't abusing particular clients, and that nobody got access to your server and started running hunts you didn't expect, or even malicious code. So you do want to be monitoring some of this. These three monitors are created by default.
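Back to the bash history artifact mentioned above for a second: what it collects boils down to reading each user's history file. A rough Python equivalent, not the artifact's actual VQL, with a fake home directory built for the demo:

```python
import glob
import os
import tempfile

# Fake /home with two users, one of whom has a bash history.
home = tempfile.mkdtemp()
os.makedirs(os.path.join(home, "alice"))
os.makedirs(os.path.join(home, "bob"))
with open(os.path.join(home, "alice", ".bash_history"), "w") as f:
    f.write("whoami\ncurl http://example.com/payload\n")

# Roughly what a bash-history collector does: find every user's
# history file and return its lines tagged with the owner.
history = {}
for path in glob.glob(os.path.join(home, "*", ".bash_history")):
    user = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        history[user] = f.read().splitlines()

print(history)
```

Note how the same `*` glob trick from the hunt handles the unknown usernames here too.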
To create a new server monitor, click the pencil-and-paper icon, select the artifacts you want to monitor on the server side (again, you can write your own custom artifacts here as well), then click Configure and set the options, basically exactly like we did with hunting. Server Events is specifically for things you want to log on the server side; it uses the same plugin-like artifact system as the hunt manager, just for server-side logging. Server Artifacts is the same concept as View Artifacts, except these are specifically for server logging.

Next, Notebooks. The main thing I find notebooks useful for, at least so far, is working on a new artifact. Let's create one, give it a name and description, and select other users if you want them to be able to work on it as well. When you click on a cell it will try to run; click again to edit it and see the original code, written in Markdown or VQL or whichever language you want to use. Add a new VQL section and you can set up your queries there. These notebooks, in my mind, are a way to prototype a new artifact and collaborate with other users on the system to test it and make sure it's working, and then you can incorporate those artifacts into your hunts or your server monitoring.

Next is Host Information, which we already saw once: this is where you find a specific host you're interested in, and it lets you do more specific things with that individual host remotely. Then we have the Virtual File System. This option is available because we already have a host selected, and it shows everything that was collected from that host, so you can see the host's history and what kind of information changed over time.
Other than hunting, this next part is probably the second most important feature, so I don't know why they put it at the end: monitoring. To create a new client monitor, I select the pencil-and-paper icon, and then we need to choose the label group to monitor. We could pick 'all', but think about the different operating systems and configurations on your network; 'all' usually isn't going to make sense. Something like our tags will make sense: dev, or linux. I'm going to monitor something specifically for Linux, so I'll click 'linux', and remember, that's our tag, not the operating system itself.

Next, select the artifacts to monitor. These are a little different from the artifacts in our hunts: they're things you might want to watch over time, whereas a hunt captures the current state of the system right now. For example, SSH logins or SSH brute-force attempts are really interesting things to monitor over time, and those monitors will tell you when you need to look into something further. I'm going to click SSH login and see if I can get some activity here. Just like before, we can see what the artifact works from on the client, in this case /var/log/auth.log, plus the query and the actual code for that monitor. We can run multiple monitors at the same time, but I'm just going to do the SSH login one. Next, Configure Parameters: select it like normal, and if you keep your SSH or auth log in a different location for some reason, you can point the artifact at it. You can also configure or change the SSH grok query, which is how information gets parsed out of auth.log, so you might want to adjust that. Then Review shows our setup again, and then we have Launch.
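The SSH grok query mentioned above is just a parsing recipe for auth.log lines; grok patterns are essentially named captures over a regular expression. A hedged Python approximation (the regex is my own simplification, not the artifact's actual pattern, and the host, user, and IP are invented):

```python
import re

# A typical sshd success line from /var/log/auth.log (invented values).
line = ("Jan 12 10:23:01 myhost sshd[1234]: "
        "Accepted password for alice from 192.168.1.50 port 52344 ssh2")

# Grok fields map onto named capture groups; same idea here.
ssh_login = re.compile(
    r"sshd\[\d+\]: Accepted (?P<method>\S+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+)"
)

m = ssh_login.search(line)
print(m.groupdict())
```

Each login event becomes a structured row (method, user, ip, port), which is exactly the shape the monitoring table wants.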
If we click the binoculars icon, we can see the raw client monitoring table. What has to happen is that the client downloads this information, processes it, and runs the query, so we're effectively setting up a command-and-control center here. We can see the query we've just added in the monitoring table. Notice it took a while for any information to come back, because the client had to pick up the new table and then upload its information to us. Now if we click Linux Events and then SSH login, we can see events over time. We don't really have any yet, because I haven't done anything with SSH, but from now on any client tagged linux is being monitored for SSH logins, so whenever I come back to this monitor I should see any SSH login events, if the monitoring is working properly.

The three things I think you'll use the most are the hunt manager; host information, once you find a host you want to investigate; and client events, which is basically monitoring. So right away you could set up some client event monitoring, and then, as new attacks happen on your network, write your own custom hunts and run them when you need them, using monitoring to gather intel before and after those events, and using host information to drill down into a specific host you want to investigate: possibly take it off the network and quarantine it, possibly do some remote acquisition and analysis.

So now we've gone through all of the main parts of Velociraptor. There's a lot to practice here, and I hope this at least helps you get started. If you want to learn more about artifacts, go to the Velocidex Velociraptor artifact definitions; under definitions they have, for example, Linux, macOS, network, reporting, and server. There's a lot there that isn't included in the default download, so I recommend you go through any artifacts you think are going to be interesting to you.
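Client event monitoring, as described above, is essentially 'watch a source and ship only the new entries as they appear'. A minimal offset-based tail in Python shows the mechanic; this is a simplification of the idea, not how Velociraptor's VQL event queries are implemented:

```python
import tempfile

# Stand-in for a log the client watches, e.g. an auth log.
log = tempfile.NamedTemporaryFile(mode="w", delete=False)
log.write("session opened for user alice\n")
log.close()

def read_new_lines(path, offset):
    """Return lines appended since `offset`, plus the new offset."""
    with open(path) as f:
        f.seek(offset)
        data = f.read()
        return data.splitlines(), f.tell()

# First poll picks up the existing line...
events, pos = read_new_lines(log.name, 0)

# ...a new event arrives...
with open(log.name, "a") as f:
    f.write("session opened for user bob\n")

# ...and the second poll sees only the new line, which is why the
# monitor showed nothing until fresh activity actually happened.
new_events, pos = read_new_lines(log.name, pos)
print(events, new_events)
```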
Also have a look through the Artifact Exchange; it's on docs.velociraptor.app, and I'll put links to all of these below, so do check them out. Besides GitHub, which we've seen, they also have a Discord and a mailing list, and there's a lot more documentation on docs.velociraptor.app, specifically under Documentation and Getting Help, covering how to set everything up. The only thing is, I sometimes find those instructions a little difficult because there are so many options; hopefully this video helps solve some of that if you're having any issues. But there is a lot more we can do with this. Remember, the way I showed you to set up Velociraptor is not for production; it's specifically for testing and understanding how Velociraptor works, and for trying some hunts on, say, your local home network. Once you get more familiar with it, go to the deployment docs and look at cloud deployment and specifically how they recommend provisioning virtual machines. There are a lot of things to consider before you actually roll this out, but it is definitely a cool tool for monitoring, for hunting, and for drilling down into your systems. I hope this helps get you started, and thank you so much for watching.