Welcome! My presentation is about incident response and how my personal perspective on that particular topic might be of interest to you. So, what am I going to talk about? Just a brief overview: what are the challenges, which challenges have been solved so far, what are the dos and don'ts during an investigation, and what can you do to be prepared in case of an investigation? Imagine you're working in an IT department, mid-sized company, a few hundred people, and all of a sudden one of the boxes starts blinking, or you get a notification, an alert. What can you do to be prepared to dig into the root cause and figure out what's going on? The last part is about a personal research project of mine; together with colleagues we built something really cool, I think, and I'm happy to talk about that at the end. So, who am I? My name is Martin Schmiedecker. I'm a researcher at SBA Research. Maybe a few words about SBA: we are a publicly funded research center that gets incentives to work on security and to work with companies, so it's kind of like a very, very small Fraunhofer. We have around a hundred people working there, and in contrast to Fraunhofer we focus exclusively on digital security and everything related to it: network testing, pen testers, combinatorial testing and so forth. So, what's my background? I did a PhD in digital forensics and I'm still working in that area, which I find very interesting, and I'm also interested in online privacy, everything that's related to Tor or that can be seen as a countermeasure to digital forensics. I'm also on Twitter if you'd like to ping me after the talk with questions. So, what are the goals for this talk? Why am I here? I'm going to give a brief introduction to incident response: what particular steps can be taken in case of a possible breach, data loss, ransomware and things like that. I want to talk about current challenges, what the problems for forensic investigations are right now, and about the things that actually work. Usually if you have a problem, you google it, you find a ton of forum posts from people with similar problems, and you never quite know which of the proposed fixes actually worked; at least I hate it when I have a concrete problem and I see 20,000 entries that all point nowhere. And I also want to talk about how things can blow up in your face. I'm an academic, I come from the Technical University of Vienna, and apparently reality is kind of different outside the academic world. So, what is incident response? Basically, and probably everybody in here knows this, companies fail to detect intrusions in time. You have Ashley Madison, Hacking Team, RSA; they have all been hacked, and it has been publicly reported. Google got hacked big time when they lost part of their source code for Gmail, simply because someone wanted to read some person's emails. And there are studies that try to estimate the time of breach and the time of discovery of the breach, but they usually leave out a large portion of unreported incidents. Just recently there was a report by the RAND Corporation saying that a zero-day stays undetected for 180 days, and similar numbers hold for breaches, which is fine if they did their number crunching correctly, but of course there's always a dark figure where nobody knows what's going on.
So, what the press is trying to tell you incident response is: there are APTs everywhere; APT, APT, APT. Nowadays you're not cool unless you've been attacked by at least two APTs. Mostly, what an APT is right now is a spearphishing email with an Office document attached; you have to enable macros, and then they're in the network. So this is not the particularly advanced persistent threat that the press would like us to believe is actually happening. Usually at the beginning of an investigation you have an initial report: somebody notices something, somebody says "that's weird", or you get an automatic alert from one of the appliances you have stacked in your server room. And the reason you do incident response is to find the root cause: what's going on, how did it happen, and how can I prevent this and future incidents from happening again. Yeah, and at the beginning of an investigation it's great if you can assemble a team and have dedicated specialists ready to work with you, but usually it's either you, or someone else who is known for being into security, who is tasked with having a look. So, of course, goals: what do you want to achieve? You want some form of reaction; you want people looking into it. If you have an intruder in your network, you want that intruder kicked out, and you want to contain them if they're still in the network. And of course you want to close any entry doors that may be open, so you need to figure out how the attacker got in and what steps they have taken inside the network. This is where my area comes in. Usually it's something like live forensics under time pressure: you know that there is somebody in your network, but you need to figure out where to look, where to find the traces left behind, which machines the attacker has been on, and so forth. Ideally, of course, you move faster than the attacker so you can prevent further exploitation, and ideally you can work remotely, because nobody likes to jog around office buildings trying to figure out where this particular machine is that might or might not have been compromised. So, of course, there's always time pressure; management would like an answer by yesterday. If you do not have any public exposure, no news entries or postings relating to whether you might or might not have been breached, that's okay. But if the press is involved and the public knows that something is going on, then it's rather stressful. There was NotPetya just a few weeks ago; there was the shipping company Maersk, I don't know how to pronounce it. What they did was set up a dedicated page just to keep people informed about the ongoing investigation: what they're doing, what they're trying, and so forth. So the public part of incident response, aside from the technical part, is also very important. So what is the context of academia in this area? Because usually you want a techie, someone who can work on the wire, and not necessarily someone from the ivory tower to look into your problems and dig into your networks. Academia is usually something like the ivory tower, in the sense that they simulate, they have their lab setup, they try to make things work and prove that an idea is valid, and yet they often cannot operate in a dirty environment, a naturally grown network. This can be very tricky.
However, the benefit academia does have is that it's rather watchful, probably not like Sauron, and to every professor who feels offended by this, I'm sorry, but usually in academia you get the opportunity to browse ideas and discuss them widely instead of within a confined peer group, which is very beneficial in such scenarios. Another problem of science, or why academia can be challenging in this field, is that you always have the conflict of science versus engineering. Many forensic investigations are pure investigations, without any academic value or any engineering value. But then you have these particular problems that academia likes to focus on, which are really hard, and scientists want to figure out how to improve the status quo, how to make things better or get better insights. And the tricky part, because forensics and incident response are really applied, is to convince an academic reviewer that this is something novel and something publishable, because the field of security in academia moves really, really fast. Usually you can find a paper online weeks before its publication, simply because the authors then have the benefit of spreading their ideas faster than their competitors, so to say. The reviewer is then in a tough position, because he may not be sold on the idea, but those practical papers usually have a thorough evaluation and they do have valid points. So this is really tricky: it's new in the context of security, maybe not so new in the area of systems security, and in totally different research areas, like medicine or psychology, this is well established. There you have studies, you have a layout for how to conduct proper science and how to conduct studies. But in this particular area, this is rather challenging. Another example is that many things in computer security are rather narratives: they are based either on marketing material or on people selling stuff, and there's not much evidence-based science going on here. If you have a new drug, you test it, you run your statistical tests at the end, and then you can conclude whether it works or not. It's not the same in digital security, because even the most simple questions, like whether it's beneficial to have an antivirus scanner in your network, are very hard to answer. Of course you need antivirus, in particular on Windows, that's what everybody thinks, but it's really hard to prove from an academic point of view, in a real environment, whether or not it is actually beneficial. Just at the last TCC, Bruno had an excellent talk on that topic; if you're interested, go have a look. So among the benefits of academia: we have creativity, we have independence, and we have plenty of minions who can do work with us. The problem is that we do not have that much money; the budgets of larger corporations are astonishing compared to what academia has. If you buy a consultant for, I don't know, a week, that's almost half a year's salary for a student. And everything from software licenses to funky equipment does not work that well in academia, because it's mostly very expensive. Still, when I started in this area, when I started my PhD years and years ago, there were just a few publications that were generally perceived as standards. One was RFC 3227, which is remarkable because it dictates the order of volatility.
The basic idea is: when you have a breach, you collect the things that vanish first, first. Then there was NIST Special Publication 800-86 from 2006, again trying to lay out standardized procedures for doing incident response. It also specifies that you have to use a write blocker: if you have a hard drive and want to investigate it, you have to use a proper blocking tool that prevents modifications of the disk. Even though those standards are really old by now, they contain all the principles that the scientific progress which followed, for me, built upon. So what are the current challenges? One of the most influential papers for my work was written by Simson Garfinkel about seven years ago. It was published at the DFRWS conference, and it was more or less a position paper, with a particular focus on digital forensics and incident response, in which Simson listed all the things that currently work well and the things that will become very challenging in the near future. Most of what was written in there turned out to be correct. One point was that there was a kind of golden age of digital forensics. Computerization progressed, people did more and more of their work on computers, and digital forensic techniques allowed you to look into the past, more or less. You had a file system with a very, very comprehensive structure, which you could parse, sort and timeline, which could give you a very, very accurate picture of what the person or persons in front of the computer had been doing. Also, the challenges investigators faced were rather simple compared to the complex problems I'm going to talk about soon. RAM forensics was possible, so you could acquire the contents of RAM; you could capture network communications; and everybody was happy, more or less. What Simson wrote was that some of the challenges of the future would be, for example, flash storage, SSD drives. Usually in an investigation, you take the hard drive, make an image of it, and then start digging into artifacts, like files or deleted files which have not yet been overwritten. The problem with flash storage is that you have up to three microcontrollers on the drive that can fuck you up in this process, because when you edit a file, it is not rewritten in place as it would be on a magnetic drive; the storage controller can put it anywhere. Compared to regular hard drives, SSDs are very complex, and the storage controller puts a lot of logic between the analyst and the actual data. He also predicted lack of time, which goes hand in hand with growing storage sizes. Like, 10 years ago, how big were hard drives? Actually, I don't remember, but one of my first hard drives was 30 gigabytes, and that was, I don't know, 20, 25 years ago. Nowadays, you can buy a 12 terabyte hard drive. Also, the artifacts you can observe have become more diverse: you have a long, long list of possible vectors you could analyze, but due to lack of time, you will focus on the most promising ones. He also said cloud. Cloud is, of course, challenging, because the data is not stored locally and you somehow have to get it out of the cloud again.
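To make that timelining idea concrete: proper tools like The Sleuth Kit work against the raw file system, but the core of it is just sorting timestamped events. A minimal sketch in Python, assuming the image is already mounted read-only; the mount point /mnt/evidence is an assumption:

```python
# Minimal filesystem timelining sketch: walk a read-only mounted image and
# emit every file timestamp as one event, sorted oldest-first.
import os
from datetime import datetime, timezone

def timeline(root):
    events = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # unreadable entry, skip it
            # One event per timestamp kind: modified, accessed,
            # metadata change (creation time on Windows)
            for kind, ts in (("m", st.st_mtime), ("a", st.st_atime), ("c", st.st_ctime)):
                events.append((ts, kind, path))
    # Oldest first: this is the "looking into the past" view of the drive
    for ts, kind, path in sorted(events):
        print(datetime.fromtimestamp(ts, timezone.utc).isoformat(), kind, path)

timeline("/mnt/evidence")
```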
Encryption is a problem too. I'm very much pro encryption if it's done properly; it's just not very convenient for the analyst. Also, the number of devices has increased and will increase further. Everyone has a pocket computer in their pocket, plus a notebook, a tablet and at least three, I don't know, PlayStations or smart TVs, cars, all those things that store information digitally, which can then be very cumbersome to analyze, making this much more challenging than it used to be. Storage capacity is still a problem today. Like I said, a 12 terabyte hard drive costs about 600 euros right now; buy four of them, put them in a software RAID, and you have really, really a lot of storage capacity. And one of the problems with this huge amount of storage is the time it takes before you can even start looking at the data. The forensic process usually dictates that you have to hash the data: you build a cryptographic hash over the entire hard drive, then you copy it, and then you hash it again to prove that you did not modify the information in any way by copying it. But think of a 12 terabyte hard drive: if you have to read it from the very first byte to the last, that takes somewhere between 8 and, I don't know, 32 hours, and then you have to repeat it, three passes in a row. That is a lot of time, which can be problematic in incident response, simply because you want to dig into the data right away. If you have money, there's special hardware for that: so-called forensic bridges, which you plug into the devices you want to acquire, and they do everything in one step, so you only have to wait a third of the time. On slow interfaces in particular, say a 2 terabyte external hard drive that reads 20 megabytes per second, it takes ages, just ages. Also, think about production systems: if you walk into a company and want to take the production Exchange server offline, that is never going to happen. Sysadmins will throw themselves in your way to stop you from taking down the production Exchange system. So there are plenty, plenty of challenges to overcome before you can actually start the forensic investigation. Just a few examples. There have been plenty of engineering efforts to reduce the time spent in these steps and to gain insights into the stored data from the very beginning. The National Software Reference Library, NSRL, is published by NIST every three months and contains hashes of plenty of software installations, which you can use to exclude files from the set that might be of interest for the investigation, because every Windows installation is kind of the same, and plenty of files are redundant across every hard drive. There has also been work, again by Simson, on identifying file fragments. When you delete a file, it is of course not magically gone; only the reference to it is. So by using file carving or file fragment identification techniques, you can figure out which files have been stored on the drive. But some of the optimizations are not that optimal. Sifting Collectors, for example, proposes prioritization during the acquisition and analysis process: you start with the most promising regions and work your way down. But there's a certain point in time where you have to stop looking at things, and you cannot ignore that there is a rest, but on the economic side it doesn't make sense to look into everything. So there has to be some form of stopping rule, some general agreement on when it no longer makes sense to dig any deeper.
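To sketch the NSRL-style exclusion mentioned above: suppose the reference hashes have been exported to a plain text file with one uppercase SHA-1 per line (that preprocessing step and the file name known_hashes.txt are assumptions); anything that matches is a known file and can be dropped from triage, anything else deserves a look.

```python
# Hypothetical known-file filter: hash every file under the evidence mount
# and report only the ones NOT found in a set of known (NSRL-style) hashes.
import hashlib
import os

def sha1_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest().upper()

# Reference data, pre-exported to one uppercase SHA-1 per line (assumed format)
with open("known_hashes.txt") as f:
    known = {line.strip() for line in f}

for dirpath, _dirs, files in os.walk("/mnt/evidence"):
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            digest = sha1_of_file(path)
        except OSError:
            continue  # unreadable file, skip
        if digest not in known:
            print(digest, path)  # unknown file: worth a closer look
```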
So, encryption, of course, makes investigations tricky. The good thing is that during investigations, people usually want you to look into their things, so they give you the recovery key, the password and so forth. But if someone wants to hide something, wants to encrypt something, that works and is usually not bypassable. Also, both on devices and on the network, more and more information is encrypted, which is an absolutely fantastic thing. Still, encryption can be worked around: traffic can be fingerprinted, so instead of watching the communication content, the communication metadata is inspected. And of course you can do traffic analysis: who is communicating with whom, how much data are they exchanging, and so forth. Cisco just a few weeks ago published a white paper on encrypted traffic analysis. They probably figured out that the world is moving towards HTTPS, everything is encrypted, and their shiny boxes cannot look into the details anymore. One of the challenges which is still very much a challenge is the heterogeneity of devices. If you need software for Android or iPhone, that's doable. But if you find someone with a Lumia phone and you want to dig into that phone, it's really hard, because nobody makes tools for the so-called long tail, where there are very few devices and a very low probability that anyone will pay you to develop support. The most common things, Mac, Linux, Windows, work fine with the commercial tools. But the more exotic ones can be very tricky. Also, cloud forensics is a lie. In academia in particular, there have been publications specifically on cloud forensics. But usually, and this is at least my perception and my opinion, cloud forensics means either that the system is remotely accessible, just like EC2, where you have a machine you can log into and do things on, or that there is some form of publicly documented API which does your software magic. Neither of these is new: you have methods for both cases, and you do not need a new methodology or new nomenclature to figure out how to do forensics on those things. And again, the commercial world is already far ahead. For infrastructure as a service you can use the regular tools, and for software as a service, like smartphone applications and so forth, you can usually fiddle around, reverse engineer the Android applications to find the communication endpoints, and then reverse the communication. What's also going to be very interesting, at least in my personal perception again, is the GDPR, the General Data Protection Regulation, which goes into effect in May 2018. Companies in particular have to be able to tell their employees what information they store about them: their browser usage, their digital artifacts and so forth. And larger companies in particular, if they haven't yet started to look into the GDPR, hopefully very soon will.
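Coming back to the metadata point for a second: even when every payload is encrypted, who talks to whom, and how much, is plainly visible. A minimal sketch, assuming a capture file taken at a mirror port and the third-party dpkt package for the pcap parsing; the file name is an assumption:

```python
# Traffic analysis on encrypted traffic: ignore payloads entirely and just
# aggregate bytes per (source, destination) pair from a pcap file.
import collections
import socket

import dpkt  # third-party: pip install dpkt

totals = collections.Counter()
with open("mirror_port.pcap", "rb") as f:
    for _ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if not isinstance(eth.data, dpkt.ip.IP):
            continue  # this sketch only looks at IPv4 frames
        ip = eth.data
        pair = (socket.inet_ntoa(ip.src), socket.inet_ntoa(ip.dst))
        totals[pair] += ip.len  # total IP length; content stays opaque

# Top talkers: often enough to spot exfiltration or a beaconing host
for (src, dst), nbytes in totals.most_common(20):
    print(f"{src:>15} -> {dst:<15} {nbytes:>12} bytes")
```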
So what are the dos and don'ts of incident response? The first rule of incident response is: you have to get a RAM image. You want the data stored in the random access memory, and you want it in a forensically sound manner. The second rule of incident response is: you have to get a RAM image. Again, you want it, you want it really badly. Why the RAM? Because everything that's juicy is in there. Many law enforcement agencies still go for the dead-box approach today: they storm into the building, pack everything that looks like storage media, and drag it into the lab. But if they find a running PC, they usually just pull the plug, box it and ship it. This is very problematic, because encryption keys, open-file information and so forth are lost in the process. It is also not reproducible, whereas with a RAM image you have an exact snapshot of what was going on on that machine at that particular point in time. Once you have the RAM image, and I will shortly talk about how you can get one, you can inspect the machine. On Windows, for example, you can use the Sysinternals tools to look at running processes or open network connections, though depending on how well you know these tools, your mileage may vary. What you really do not want to do is touch anything persistent on disk: do not search for files or do anything that hits the hard drive a lot. You want to focus on the things that are on that machine without writing to its disk. There's also the eternal conflict between rebooting the machine and pulling the plug. If you have the perception that the operating system may be compromised, you want to pull the plug, simply because shutdown processes can be manipulated and information can be lost forever if you simply shut down the machine. But depending on the environment and the context, you can also just shut it down, because the user would shut it down anyway. Once a machine is dead, it's of course dead. There has been the cold boot attack, where you can recover the RAM contents even 10 minutes after the computer has been shut down, but that is a rather esoteric approach. In my perception, once a machine is dead, it's dead, and it stays dead until the bitter end. You also do not want to reboot the machine, because again, RAM is wiped during reboot and the information in there is lost. For regular investigations, the best case is: you have one machine which might or might not be compromised, you do not have any lateral network movement, and the breach is contained in time. This is the best case. This is a piece of cake, something you can easily work with. Reality, however, is often very, very different. For example, nowadays you can have server machines with a terabyte of RAM, and just taking this one terabyte of RAM and making an image of it takes ages, because you either have to dump it to an external hard drive, which is slow, or transfer it over the network, which is also very slow. Also, a breach is usually not local to one machine; it could be the entire network, an entire VLAN, and you want to investigate all those machines at once, which can be tricky. You can have very fast network links, 10 gigabit and upwards, which again are very hard to capture, mostly because mass storage devices do not have 10-gigabit write capacity, and you need to figure out a way to get at that data. So how can you get a RAM image? On Windows, the best approach is usually FTK Imager. It's a free tool: you install it, and it gives you a button you can click to acquire the RAM image and pipe it onto an external hard drive or over the network. There is also Redline from Mandiant, and many of the forensic toolkits already include a free tool which you can use to acquire RAM images.
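Whichever tool does the dumping, you typically want the image streamed off the box immediately and hashed in flight, so the machine's own disk is never touched. A hypothetical collection-side receiver (the listening port and output file name are assumptions); on the target, you would pipe the acquisition tool's output into something like nc collector 4444:

```python
# Collection-side receiver: accept one TCP connection, write the incoming
# RAM image to local disk, and compute its SHA-256 while it streams in.
import hashlib
import socket

HOST, PORT = "0.0.0.0", 4444   # listening endpoint, an assumption
OUT = "ram_image.bin"          # output file name, an assumption

srv = socket.create_server((HOST, PORT))
conn, addr = srv.accept()
print("receiving RAM image from", addr)

h = hashlib.sha256()
total = 0
with conn, open(OUT, "wb") as out:
    while data := conn.recv(1 << 20):
        out.write(data)
        h.update(data)
        total += len(data)

print(f"{total} bytes written to {OUT}")
print("sha256:", h.hexdigest())  # record this in the case notes
```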
On Linux there is LiME, a kernel module which you can load dynamically at runtime and which gives you direct, raw RAM access. A similar thing is doable on macOS. Rekall, as part of GRR, which I will talk about briefly in a minute, works for all of the above, so if you have the capacity and the will to dig into GRR, that is probably the best way to get a universal way into RAM. For mobile devices, on Android there is again LiME, which you can push over the ADB interface. On iOS there is nothing you can do, simply because it's designed that way. So what can you do, what could be your takeaway, to be prepared when shit hits the fan? Just a few tricks. First thing, of course: log all the things. Every machine has log files, every application has log files. The best way is to collect them centrally, and this can help you tremendously, even though log management is still a tedious task. Push everything into a logging machine or a pipeline and have it there, ready, once you need it. Even simple things like NetFlow information from your fancy switching boxes can help in case no other information is retained. For networks, the upper-class switches usually have a so-called mirror port: they can take the uplink port and mirror it seamlessly to another port. But you can also do it on a budget; mirroring network traffic is rather cheap. The example here is just a random managed gigabit switch from Amazon which has gigabit on all ports and offers the possibility to mirror traffic. The good thing about traffic mirroring is that it's completely passive, so an attacker cannot see that you're actually inspecting the traffic. I think this box is about 40, 50 euros, which is really cheap. The second one is a bit more expensive, but it runs an open Linux, and you can already run analysis on that box, filtering out some of the traffic. Then again, reality kicks in: where do you place this particular network tap? I tried to do this at SBA. The problem was that if I placed it inside the firewall, I would need numerous mirror ports; if I placed it outside the firewall, I would not see the real internal IPs, because the modem only sees one IP, the firewall, communicating. So I can either lose information or cover only half of the building, for example. For collecting logs, there is a lot of open source software readily available: you have the ELK stack, you have Graylog, you have OSSEC, which can all pipe the logs over the network into a central bin so that you have them in place once you need them. In the Microsoft world there is also something called Windows Event Collector, and even though I've never worked with it, it probably does the same thing, simply judging by the name. You can also use Splunk or the like if you have enough money, but the first three just need a machine to run on.
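As a taste of how little code central log shipping needs: Python's standard library can forward events to any syslog-speaking collector, and both Graylog and the ELK stack can ingest syslog. The collector host name loghost.example.com is an assumption here:

```python
# Minimal log shipping: send application events to a central syslog collector.
import logging
import logging.handlers

logger = logging.getLogger("webapp")  # logger name, an assumption
logger.setLevel(logging.INFO)

# UDP syslog to the central collector; use TCP in production if loss matters
handler = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user login ok user=alice src=10.0.0.23")
logger.warning("user login failed user=root src=203.0.113.7")
```

The point is that the events leave the machine immediately, so even if an attacker later wipes local logs, the central copy survives.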
Then there's the funky part; again, reality kicks in when you try to figure out how to monitor all of this. This is tricky, because you have a high number of devices, a large network capacity, and, mostly why I put this picture here, you do not want to go down into the basement where the actual servers are. This is where Google GRR comes into play. I'm a huge fan of it. It's built specifically for incident response, and it's the system that Google uses internally for their incident response. You could also use smaller solutions like PowerShell, which now runs on Windows, Linux and Mac; there's also the Linux subsystem on Windows, and you could try to work backwards from that to get remote access, but GRR is the thing that usually floats your boat. Again, trying to deploy GRR in our office didn't work, because some people complained that they have sensitive customer data on their devices which cannot possibly be exposed in any particular way. So there we have privacy and legal implications: we simply cannot allow personal devices used by employees to be inspected at any time, which is a good thing, of course. What I then used it for is infrastructure, so you can have GRR on all the hosting servers that you control and have it running all the time. I'm a bit short on time still. The good thing about GRR is that it works remotely, all the time. The logic is server-side, so you have your central server which holds all the information, and the way GRR works is that it has an agent running on every machine. The good thing is that installation is really simple. You click it; you do not even have to click next, next, next. It's like the perfect botnet: you just click it, it's installed, and it's up and running. Also, offline clients run their tasks as soon as they're back online: if a machine didn't get the task because it was rebooting or having network issues, it will fetch the task as soon as it's back, complete it, and send the results. The really good thing about GRR is that it was designed with scalability in mind; allegedly there are setups with more than 100,000 machines monitored. It's a long-term supported project, publicly available on GitHub and all that. The only downside I could figure out is that it has privacy and legal implications which are not trivial to overcome. One of the really cool things about GRR is that you do not need to transfer the RAM image: GRR offers you a straight way into the kernel of the machine, so you can use Volatility on the live RAM. That is really, really cool, because you do not have to transfer hundreds of gigabytes of RAM, you do not need to store them, and you do not need the time to process them; you simply run the memory analysis against the running machine, and everything is in place for your investigation. Another nice feature of GRR is hunts: you can task all the machines orchestrated by your GRR master at once. For example, when a new report on some super malware is published, by Mandiant or FireEye or whoever, it usually contains indicators of compromise, typically hashes of files, or mutexes, or similar things, and you can pipe this information right into the GRR console and have it checked on all machines within minutes. As you can see, I have an HP notebook; there was a bug in one of the audio drivers which logged everything, every keystroke, into a file. In our office, people had to walk from desk to desk to see if this file was there, but with GRR this would have taken 30 seconds for every affected machine.
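Conceptually, what each endpoint does during such a hunt is a simple sweep. A stand-alone, hypothetical version (the IOC digest and the search root are placeholders; GRR's real flows are far more efficient and careful):

```python
# Stand-alone sketch of an IOC hunt: flag any file whose SHA-256 matches a
# published indicator-of-compromise list.
import hashlib
import os

IOCS = {  # example digest only (SHA-256 of the empty string), not a real IOC
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return h.hexdigest()

hits = []
for dirpath, _dirs, files in os.walk("C:\\Users"):  # search root, an assumption
    for name in files:
        path = os.path.join(dirpath, name)
        try:
            if sha256_of_file(path) in IOCS:
                hits.append(path)
        except OSError:
            continue  # locked or unreadable file, skip

print(f"{len(hits)} IOC match(es)")
for path in hits:
    print("MATCH:", path)
```

The value of GRR is that it runs exactly this kind of check fleet-wide, in parallel, and collects the answers centrally.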
So now a short contextual cut to work that we have been doing. PeekaTorrent becomes relevant in a later step of an investigation, because it takes a hard drive image and can tell you things about it simply by looking at the file fragments it finds on there. So you have the image, all the hashing and all the preparation has been done, and what you would then like to know is: what remaining artifacts can you identify on that drive, what is still there that was once possibly on there? The way we approached this problem: you need a lot of file hashes, and one file hash database par excellence is peer-to-peer file sharing, because in peer-to-peer file sharing you have gigatons of hash values swirling around the network, simply because people want to share files and identify a file based on a hash value. For investigations, you would usually like to ignore all of these, because they are neither the cause nor the cure of your problems; well, Justin Bieber maybe is, but probably not in this context. So how can you exclude this information? Say you have 12 terabytes of data on some office file sharing server, someone had, I don't know, all of Rick and Morty on there, and you want to figure out whether this really is Rick and Morty, really the latest episode, or not. Our approach was to harvest as many torrent files as possible and use them to figure out which of the sectors can be identified. One of the very nice things about torrent files is that they are copyright-free: the content being shared is of course copyright-protected in many cases, but the torrent file itself does not carry the same protection. What we then did was extract all the hash values and pipe them into the wonderful tools bulk_extractor and hashdb, again open source software specifically built for investigations with a very specific set of tasks, and we published this last year. Just a few words about the hashes: everything in BitTorrent is hashed with SHA-1, and SHA-1 is very common in digital investigations as well, and what you want to figure out is which chunks are absolutely irrelevant. The real benefit of this approach is that it's fast. It's really fast, not because we use torrent files, but because bulk_extractor is built in a way that makes it really fast. I once saw a presentation by Simson Garfinkel where he said they once had access to a machine with 200 CPU cores, and bulk_extractor pinned all of them. Pure parallelization at its best: the more cores you add, the faster it finishes. It can also find deleted and even partially overwritten files, so you can state afterwards with confidence that Justin Bieber was stored on this PC. Also, the hashdb databases which drop out at the end can easily be shared, and I will show you that in a second. So how did we pursue this? Our goal was to get as many BitTorrent files as possible. The first approach, of course, was to build a crawler that crawls The Pirate Bay, KickassTorrents and all those file sharing sites to get as many torrent files as possible. In the end, I think after a four or five month period, we ended up with 3.3 billion chunk hash values, which, if you add it up, covers up to 2.6 petabytes of information. So everything that has been shared via these files adds up to 2.6 petabytes, and the tool chain is readily available. What we then did, and this is new, this happened after the publication of the paper: we also used a DHT crawler, which listened to the queries coming in on the distributed hash table, and we got another 2 million torrent files, which we could then index, boosting the maximum amount of identifiable information up to 6.5 petabytes.
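Extracting those values from a .torrent file is straightforward, because the pieces entry of the info dictionary is just a concatenation of 20-byte SHA-1 digests, one per piece. A small sketch, assuming the third-party bencodepy package for the bencode parsing; the input file name is an assumption:

```python
# Pull the per-piece SHA-1 values out of a .torrent file. In BitTorrent the
# "pieces" value is N concatenated 20-byte SHA-1 digests, one per piece.
import bencodepy  # third-party: pip install bencodepy

def piece_hashes(torrent_path):
    with open(torrent_path, "rb") as f:
        meta = bencodepy.decode(f.read())
    info = meta[b"info"]
    pieces = info[b"pieces"]  # raw bytes: 20 bytes per piece
    assert len(pieces) % 20 == 0
    hashes = [pieces[i:i + 20].hex() for i in range(0, len(pieces), 20)]
    return info[b"piece length"], hashes

plen, hashes = piece_hashes("example.torrent")
print(f"{len(hashes)} pieces of {plen} bytes each")
for h in hashes[:5]:
    print(h)
```

Each of those digests can then be loaded into a hashdb database and matched against the block hashes that bulk_extractor computes from the drive image.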
So, sharing is caring, of course: the paper, the tools and everything are online, and I'd be happy to get feedback on this in particular, because the forensic community where I presented it was rather unimpressed, I'd say. So I'm presenting it here, and I'd love to hear your feedback on whether this is of value for your specific tasks or whether it was wasted time. Anyway, thank you very much. Thank you very much for this very informative talk, Martin. There's still a bit of time left for questions. Do you have any questions? Please line up; there are two microphones, one in the back, one here in the front. Anybody have questions? Then I would have a question. Oh, there's somebody. Am I on? Yeah, good. So, you looked at a whole lot of torrent files and you collected hashes of the actual content those torrents point to. Excuse me, could you come a bit closer? Closer, yeah. Ah, there I am. Thank you. But some of those files might actually be malicious files, malware. Did you do any filtering on that? No. The beauty of it is that this is just an additional step towards finding out what's on a drive. What you find is the torrent file which is relevant for these particular sectors, and then you can inspect it: you can see which files are in it, which files have been identified. So yes, there was no filtering. But what I forgot to mention: the beauty of this technique is that it is not confined to BitTorrent files, because usually Justin Bieber or whatever is not that impressive. You can pipe any information you have into it. For example, if you have the suspicion that one of your developers took code, you can just take the code, torrent it, and then apply this. Probably don't publish it, because then it would be a public torrent. But this works without any knowledge of the content itself. I would like to know, what's your preferred system? How do you do it? On my notebook I have Linux, my servers are Linux, my PC at home is Windows. Yeah, a mixture of a bit of everything. And did you ever have a big incident you want to tell us about? Not that I'm allowed to talk about, no. Okay, so are there any other questions? Okay, then thank you very much again.