Hello everyone. We have a prerecorded video for you: an intro to Velociraptor. It's about a 40-minute presentation, and we hope you enjoy it. If you have any questions, please feel free to direct them to the workshop channel or the Recon Discord server for more immediate answers.

In this video we'll demonstrate how to install Velociraptor, do a quick guided tour through its features, and give a quick introduction to this new DFIR tool. You might have heard of Velociraptor before; here we'll show how to install it and do a quick demonstration. You can see my desktop, a typical Windows system. I'm going to show you how to install Velociraptor from scratch in a few minutes, and then we'll look at how to actually use it in real DFIR work.

The first thing I'll do is go to the Velociraptor GitHub page and download the releases. I'm going to show you how to install Velociraptor on a cloud platform. The server will run on Linux, so I need to download the Linux binary. I'm also going to run the Windows executable on Windows, so I need the Windows binary. Finally, I'll download the source code so I can show you how to build an MSI installer.

Once I've downloaded these binaries, I'll open a new shell and run it as administrator, so I can install the relevant MSIs. Since I downloaded everything into the Downloads directory, I'll just change directory to that. If I do a dir, you can see the binaries I've just downloaded.

Okay, great. The first thing we need to do is create a configuration file which we can deploy on the server. I'm going to create the configuration file on the Windows machine, then use it to create a Debian package and push that to a Debian server in the cloud. So first I'll generate a configuration file using the configuration wizard. I'm going to use the -i flag, the interactive wizard, which helps me generate the configuration. It goes off and asks some questions about exactly what type of deployment I want, and creates a configuration for that.

I'll start off: I'm going to be running it on a Debian machine, so I'll choose a Linux server. A file-based datastore is usually the simplest, and this is the directory that all the files will be stored in. In this particular example we're going to create a Let's Encrypt certificate automatically, so it will create an SSL certificate without any further intervention, and we'll authenticate with standard usernames and passwords, which is the second option. I'm going to use one of my training VMs, and this will be the DNS name for the training VM. Because we want to create a Let's Encrypt SSL certificate, we need DNS to be properly set up, so we're going to use a real DNS name. And Velociraptor has a really useful feature here: it will update dynamic DNS by itself, without us doing anything at all. We use Google Domains for the dynamic DNS, and Velociraptor is able to go out and update it.
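For reference, the wizard being stepped through here is started with a command along these lines; the exact binary name varies by release, so treat the file names as placeholders:

    rem Run the interactive configuration wizard from an administrator shell
    velociraptor-windows-amd64.exe config generate -i

By default the interactive wizard writes two files into the current directory, server.config.yaml and client.config.yaml, which we'll look at in a moment.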
And over here I've got the credentials for that, which you can get from the dynamic DNS settings in Google Domains; I just copy them into the prompts here. Then it asks me to create a new user. This is the initial administrative user that will be created when the service installs, so I'll create a new user, give it a password, and press enter. It generates some keys, finishes configuring, and you can see it created a server config and a client config. If I have a look at the files, there's the server configuration and the client configuration, which contain the key material and all the settings.

The next step is simply to create a server Debian package using that configuration. We run Velociraptor again and tell it to use the server configuration with the debian server command; we're just asking it to create a Debian package for the server. If we tried to do this right now, it would complain that it doesn't have a Linux binary: we're running a Windows binary and asking it to build a Debian package, so it needs the Linux binary as well. So in this scenario we give it the Linux binary to package, and it builds us that package. Let's have a look: there's a Debian package here that contains the configuration embedded within it, plus all the startup scripts and the service configuration. So all we really have to do now is push that to a cloud endpoint and start it.

I'm just going to SCP it to my cloud machine; over here I've got the IP address of my cloud machine, and we'll copy the Debian package over there. Now I'll SSH to that machine and install it. Okay, if we look at my home directory, there's the Debian package, so I just install it with dpkg, and it goes ahead and installs and creates the service. It's pre-configured to start, so it should just work and bring the service up. I can check that the service is up, and it shows me green, active (running), and it's all ready to go. So now if I browse to https://vm1.training.velocidex.com, I should be able to log in with the username and password that I put in the configuration file earlier.

So now we have a Velociraptor server set up; it only takes a minute. You can see the version down here, and this is the GUI. This is great, but that's just the server: we still don't have any clients attached to it, because we haven't deployed the client side yet. So now I'm going to show you how to create an MSI that can be deployed on the Windows platform.

Let me get out of there and go back to our Downloads directory. To build an MSI we use a tool called WiX, which is a typical MSI build framework, and it's very easy to do. Let me open the source code, which we downloaded from the GitHub page; in the docs directory you will find a directory called Wix. This directory contains all the scripts required to use WiX to build the MSI. I'm just going to extract that one directory into the Downloads directory. Okay, I can see it here; let me go into it and quickly show you the scripts. We have a number of different scripts here.
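Before we move on to the MSI, the server-side steps we just ran boil down to roughly the following; the exact flag, package, and service names are from memory of this release and may differ in yours:

    rem Build a Debian server package on Windows, supplying the Linux binary to embed
    velociraptor-windows-amd64.exe --config server.config.yaml debian server --binary velociraptor-linux-amd64

    rem Push the package to the cloud machine, install it, and check the service
    scp velociraptor_server.deb user@<server-ip>:
    ssh user@<server-ip>
    sudo dpkg -i velociraptor_server.deb
    sudo systemctl status velociraptor_server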
What we want to do is build, in this case, a customized version of the MSI that already embeds our specific configuration. That MSI can then be deployed using any number of software management tools, like Group Policy or SCCM or anything like that. For these scripts to work, I need to create a directory called output and copy into it the binary that I want to deploy, which is the Windows binary, renamed to velociraptor.exe. Additionally, I want to copy the client configuration into the output directory as well. So now I have two files in the output directory, and these two files are what gets packaged; that's really all that's required to go into the MSI.

I'm just going to run the build-custom script to create the custom MSI for me. All it does is run the WiX toolset with the right scripts and options to build the MSI. It takes a second, and it creates an MSI that I can then use. Now if I have a look, there's a custom MSI here ready for me to deploy. So I'll just install it on this machine, and it installs Velociraptor, ready to go.

Okay, so that's all it took. If we have a look over here, we should be able to see the client already checking in: it installed on this machine and checked into that Debian machine in the cloud running the server. That's really all that's required to deploy the Velociraptor client and server. In a real deployment I would now take that MSI and deploy it across the fleet to get all the endpoints. So we've managed to install Velociraptor in a few minutes, with a proper SSL certificate and automatic dynamic DNS setup, and it's ready to go.

We've seen the client check in, so now what can we do with these clients? Let me give you a quick introduction to the UI. When you first go to the Velociraptor UI, you'll see the dashboard, and over here on the left side you can see a number of different options for the different screens of the UI. The home screen is just a dashboard that tells you a little bit about the server: you can see the CPU and memory utilization of the server, and how many connected clients there are. Over here you can see all the GUI users that are currently configured and what roles they have; you can see I only created one user, the administrator, though I can create others. And over here you can see the version of the server that's currently running.

Now, normally when I use Velociraptor I might want to look at a specific endpoint, so I would search for it. Over here there's a search box where I can use the hostname, and you can see it has some completion going on. Or I can label machines: I can tick this machine and create a label, say 'test'. The label can be anything; it simply attaches that label to the machine, so now it's labeled test, and I can search for label:test. Labels are useful later on when we do hunting and so on, because we can target specific labels.

Once we've found a machine we want to look at in more detail, we just click on it. This is an overview screen that shows us what this machine is all about: it's a Windows Server Datacenter machine, and we've got some basic information about it.
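Stepping back for a second: the MSI customization we did a moment ago boils down to just a few steps. Roughly, with the script and MSI file names being whatever the build-custom batch file produces in your snapshot of the docs/wix directory:

    rem From inside the extracted wix directory
    mkdir output
    copy velociraptor-windows-amd64.exe output\velociraptor.exe
    copy client.config.yaml output\client.config.yaml

    rem Run the build-custom script, then install the resulting MSI
    build_custom.bat
    msiexec /i custom.msi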
On this screen, probably the most important things are the agent version, when it was last seen, and the last IP address we've seen coming from that machine. If we click over to the drill-down page, we can see more details about the machine: more information about the platform and all that sort of stuff. But probably the more interesting thing here is the footprint of the agent on the endpoint. We can see that it takes around 35 megabytes of memory, and the CPU load is virtually zero, because the agent isn't really doing anything. When we do some heavy hunting we might do a lot more work on the endpoint, so it's useful to see how much memory we're actually using and how much impact we're having. This particular view also shows us the users that exist on the endpoint. This is just overview information, but we always collect telemetry, so you can always see the most recent telemetry from the endpoint about its footprint and so on.

The next tab over is the interactive shell. This is an easy UI that lets us run shell commands on the machine. We generally try not to run too many shell commands, because they can be unpredictable, but it's possible to just break into a shell. Over here we've got a number of options: we can run shell commands through PowerShell, cmd, or bash. I usually prefer PowerShell because it's a little more reliable with regard to escaping quotes and things like that. So here's an example: ipconfig /all will show us everything about the interfaces. You can see over here the time and who ran the command, so when you have multiple users, you can see who actually issued each shell command. And there's a little bit of UI here that hides the results: when you have many shell commands it's really easy to lose track, so it lets you fold each result away into a collapsed view. It's just a UI feature. We can type something else, say ping google.com, and run it, and you'll see this spinner just waits while the command is running; when it completes, we see the results. This is handy when you have to look at the machine interactively.

The next tab over is the VFS. The VFS is a way for us to interactively examine the machine's file system. At the top level here we have the different accessors, the different ways we can look at the file system. What we're looking at here is really just the server's cache of what the file system looks like; it isn't live-checking the file system on the endpoint, it's just what the server already knows about. So when you navigate to a directory that hasn't been listed before, it tells you there's no data available for this directory. You can click this button here to refresh the directory, which issues a directory listing query to the endpoint and refreshes the server's view of it. So you can go through and click and refresh, click and refresh, or you can simply do a recursive refresh, which takes a little longer but recursively refreshes all the subdirectories, so you can browse through them without having to click all the time. It's a bit of a UI convenience.
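Incidentally, a directory refresh like this is just a VQL query being scheduled on the endpoint. A standalone query in the same spirit, a sketch rather than the exact query the VFS uses, might be:

    SELECT FullPath, Size, Mtime
    FROM glob(globs="C:/Users/*/NTUSER.DAT")

That uses the glob() plugin to list matching files, the same kind of building block the directory listing relies on.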
Okay, for example, if we look in a user's home directory, you would know that there's an NTUSER.DAT in there. We don't actually have the data; we just know about it, because we listed the directory. If we actually want to see that file, we can fetch it specifically from the endpoint: we just click on this button and it goes off and fetches it. Then we get a little icon here indicating that the server now has a copy of that file, so we can view it from the server, download it, and so on. For those of you who know, NTUSER.DAT is typically locked while the user is logged in, but Velociraptor will fall back to raw NTFS parsing in case it's locked, so it will retrieve the data even then, and you can see the hex view and so on.

So that's essentially the VFS, the virtual file system view. And under the NTFS accessor we can see what the file system looks like to the raw NTFS parser, so we can see all those hidden files like the $MFT, and we can collect the $MFT in exactly the same way as before, just by pressing this button. This one is somewhat larger, so it's going to take a little longer; then we can look at it and download it in just the same way.

So the question is, what is actually happening when we press these blue buttons? What is the Velociraptor endpoint doing? Well, Velociraptor is really a VQL engine: VQL is the query language that drives the entire Velociraptor architecture. When we click around the UI, we're issuing VQL queries on the endpoints, just different kinds of queries. If we go back to our overview, we have the Collected tab, which shows us all the artifacts that we've collected. An artifact is essentially a named VQL query. Every time we click through the UI, behind the scenes the UI simply schedules a different kind of artifact to collect from the endpoint; in other words, it issues different VQL queries. You can see that by looking at the Requests tab, which shows the raw requests going out to the endpoint, and over here are the VQL queries themselves. So really, whenever we talk to the endpoint, all we're doing is sending it VQL queries; that's basically all it is. For instance, here, when we ran the PowerShell command, we can see the query that carried that command.

When the query gets to the endpoint, it executes there, and the next interesting tab is the query log. As the query executes on the endpoint, the endpoint logs interesting things about it, like errors, so we can see over here that as the query runs, we get logs. Then finally it tells us how many rows the query returned, because ultimately all queries just return rows; it's just a query.
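For example, the query carrying that PowerShell command is roughly something like this; in reality the GUI wraps the command in a dedicated artifact, so this is a simplified sketch:

    SELECT Stdout, Stderr, ReturnCode
    FROM execve(argv=["powershell", "-c", "ipconfig /all"])

The execve() plugin runs the program and returns its output and return code as a row, which is exactly the shape of the results we see in the next tab.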
The Results tab simply shows us the rows that were returned for each query. In this case, when we ran the PowerShell command, it returned a single row with columns for stdout, stderr, and the return code. Over here, when we listed the directory, it's a bigger query and it returns more rows and more columns, but ultimately it's still the same: it's just a table. Everything we do returns a table; all queries are simply tables.

So we looked at the VFS UI, and it's a nice, intuitive UI; it's what we're used to from Windows Explorer and the like, and people understand the file system hierarchy. But we've seen that it really just creates artifact collections in the background, and we can collect other artifacts too. If we click on this plus button here, we can collect a new artifact, and over here we have a search screen, so we can search among a number of different artifacts. For example, the prefetch artifact. As you know, prefetch files are stored in the Windows Prefetch folder and maintain information about executables that have been run; one of the cool things about them is that they maintain the run times of each executable. So it's often useful to build a timeline from the prefetch information, because then we can pinpoint when a particular binary was run, which can be useful in a DFIR investigation.

So we choose the prefetch artifact, and we can see that the artifact gives us a bit of an explanation of what it does and some background information; it can take a bunch of parameters, and over here we can see the VQL source that's actually going to run. We can look at it, but we don't really need to understand it too deeply. If we want to collect it, we simply click Add, then fold that top pane up to get access to all the parameters we can configure. The parameters are used to control the VQL query run by the artifact; this particular artifact lets us filter by timestamps, or binary regular expressions, and other things like that. But we're just going to run it as-is, because the defaults usually do the right thing. When we run it, it immediately issues that collection against the endpoint, goes off, and builds a timeline from the prefetch data: for every binary that was run, we have its timing here. So this is really great; we can see exactly what the results are.

Another very interesting one is to look for scheduled tasks, because that's a pretty common persistence mechanism. Again, all we have to do is click Add and launch it; off it goes and calculates, taking a couple of seconds, and then we have the results. So this is basically how we collect a bunch of information from a single endpoint about prefetch, about the task scheduler, and so on.
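And because artifacts are just named VQL, they can also be invoked from other queries; the artifact names here are as in recent releases and may differ in yours:

    SELECT * FROM Artifact.Windows.Forensics.Prefetch()
    SELECT * FROM Artifact.Windows.System.TaskScheduler()

Each call expands to the artifact's VQL and returns its rows like any other table.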
That was all from one endpoint, though. What if we wanted to collect it from all our machines at once? In this case we only have one machine connected to this deployment, but typically we might have, say, 10,000 endpoints connected. When we want to collect the same artifact from many machines at once, we call that a hunt. Over here we have the hunt manager, which is exactly the same idea, except it automatically collects from all the connected machines at once. So we have a very similar UI to before, except now we're doing it as a hunt.

Let's say we wanted to find information about running processes, so we want to run PSList as a hunt. We just choose Add, and again we can configure it: you can see we can give a regular expression for which processes we want to look at, and so on. But this time the hunt has an expiry. What this means is that once we start this hunt, it will keep running until the expiry time, and any time a new machine comes online and connects to our server, it will pick that collection up and run it. So we don't have to catch the machine while it's on; we just schedule the hunt, and it runs when the machine comes online, at its convenience. We click Next and specify a description, say 'process listing'. Here we can choose to run it everywhere or just on particular labels: if you recall, we created the 'test' label before, so we could restrict the hunt to all the machines with that label, or run it everywhere, which assigns the hunt to all machines. We click Next, and it shows us the request that we're going to send. Looks good; let's go.

When we create the hunt, it's created in a paused state, so it's not actually running yet; we click Start to actually start it. As soon as we click Start, it schedules the hunt on all the machines that are currently connected. So far it's only got the one machine, and it's finished. We can have a look at all the clients the hunt has reached; there's really only one in this case, but there could be many more. The status tab shows us whether there were any errors, and over here we can see the results of the hunt: this is the process listing from this machine.

If we wanted to process this data with another tool, we simply prepare a download over here, which creates a zip file containing all of the information in this hunt from all the machines. Let's take a look at what the zip file looks like. It's only going to be a small one, because there's really only the one machine, but we get a CSV and a JSON file of all the results combined from all the machines, and as well as that, the collections split up per machine: these are the logs, and these are the artifacts we collected as part of this hunt, both individually and combined. And if any of the hunted artifacts collect files, we get those in the zip as well. So this is one way we can export data.

Now over here, you can see the hunt still shows as running, and some people are confused by that, but it's just a reminder that hunts always run until they expire: from when we started it until the expiry time it will be active, and then it becomes stopped by itself. Stopped just means that it no longer accepts new clients that come online after that time. So that's basically hunts.
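As an aside, the PSList collection scheduled by this hunt is itself just VQL over the pslist() plugin; a simplified sketch of the kind of query the artifact runs is:

    SELECT Pid, Name, Exe, CommandLine, Username
    FROM pslist()

The hunt simply fans that same query out to every connected endpoint and aggregates the rows.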
Okay, so that's pretty cool. We've seen how we can export the data and use it in another tool. But sometimes we really want to post-process the data within Velociraptor itself. So the next feature I want to show you is the notebook feature. A notebook is basically a collaborative analyst tool that we can use to analyze the results of a particular investigation, put them together, and post-process them.

Let me quickly show you how I would build a notebook. I click this plus button, which creates a new notebook. I can call it whatever I want, 'test notebook' for example, and give it a description, and here I can choose collaborators I'd like to work with. So it's like collaborating on a Google Doc or something like that; a pretty familiar concept. When I submit it, it creates a new notebook, and you'll see the title copied down here, where we can see all the notebooks I currently have. When I click on the title itself to bring it into focus, you'll see that it creates what's called a cell: a notebook consists of different cells. This one is a Markdown cell; you can see the type is markdown. I can simply use it to describe things, and normally I would put some background for the investigation here, like 'demo cell for the demo'. I can also take a screenshot and paste the image straight in. It's just Markdown, so if you're familiar with Markdown it's all very familiar; when we save it, it simply renders, so it's a place where I can put some notes.

That's not the most interesting part, though. Let's say I wanted to do some more interesting post-processing. I can create another cell from a particular flow or hunt that I created previously, that is, from a particular collection. So let's add a cell from a flow: I choose the client, and these are the flows that ran before, so here I've got my task scheduler collection. All this does is pre-fill the VQL for me, with the right flow IDs and all that sort of stuff. It's simply a VQL query that post-processes the information we've already collected; it doesn't go out to the client and collect the information again, it just post-processes it, which is useful for filtering the results and so on.

For instance, in this particular case we're looking at scheduled tasks, and a malicious task might run PowerShell, so we want to know which tasks are running PowerShell. This query returns all the tasks, and now we filter it: WHERE Command, then =~, which is the regular-expression operator, matches 'powershell'. I press save... okay, there's no PowerShell here; we just have Google Update and so on. Rundll32 is another common one, sometimes used maliciously, sometimes legitimately. This machine isn't actually compromised, but just so you can see it, here we can filter by rundll32 and see all the commands that involve rundll32. So this lets us do these post-processing kinds of operations. We can do the same thing with Add Cell From Hunt, looking at our process-listing hunt; same thing, it simply pre-fills that query over the hunt results for us.
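The pre-filled notebook query is built around the source() plugin, which reads rows back from an existing collection rather than from the endpoint. Simplified, with the flow and client IDs that the GUI normally pre-fills omitted, it looks something like:

    SELECT * FROM source(artifact="Windows.System.TaskScheduler")
    WHERE Command =~ "rundll32"

Because source() reads from the server's datastore, re-running this filter costs nothing on the endpoint.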
So, for instance, we can say WHERE Name =~ 'velociraptor' to see all the Velociraptor processes that are running. You can see there it is: the Velociraptor process is running, here are the hashes, here's the Authenticode information. Is it signed? Yes, it is. And so on. We can pull out this kind of high-level information, and the username shows it's running as SYSTEM. So this is a pretty cool way of post-processing the different collections.

I'm not going to get too deep into artifacts and how to actually write VQL, but suffice it to say that on this screen we can view all the different artifacts. Let's just reiterate what an artifact is. As I said, Velociraptor simply runs VQL, but if the UI required you to type a new query each time, that would be very tedious and error-prone, and just not a very good user experience. So Velociraptor has the concept of artifacts: an artifact is simply a YAML file that contains the description behind a particular piece of VQL, and the VQL itself. Let's have a look at something like the DNS cache. This is an example of an artifact, and you can see that on this screen we can view all the different artifacts and we can actually edit them as well. This particular artifact has some VQL here that just queries the DNS cache on a Windows system, so we have some explanation of what it does, and then it just does it. We said it's a YAML file, so if we click the pencil icon here, it allows us to edit it a little, and you can see we can customize the artifact through this UI. We could add a filter or something else here, change it as required, or write our own. So we can go over here, open a new tab, and collect the DNS cache again. It's very quick: as soon as we ask to collect it, it essentially does it instantly; it goes out to the endpoint and collects it, and we can see all the DNS names that are in the cache at the moment.

So hopefully this was a quick, short introduction to Velociraptor. We went through the installation process, which only took a couple of minutes, and then through some of the things we can do with it. Again, the sky really is the limit: we can collect all sorts of things, and we would normally respond to different incidents by collecting different artifacts all the time. It's just a very powerful tool that gives us unprecedented visibility into the endpoints, which we really can't get with other tools currently.

Okay, thanks for watching. If you're interested in Velociraptor, have a look: go to the GitHub and download it. It's open source, so contribute and join our community. Thanks again for listening.