Welcome back, everyone. Today we're talking about how to start a digital forensic investigation with Autopsy. I'm using Autopsy 4.19.3, but any version of Autopsy 4 will be very similar to this. Before we begin, we have to get a couple of things ready. First, go to autopsy.com, click the big download button, and follow the instructions; it's very straightforward. Next, I recommend you get HxD. It's a freeware hex editor, and it has integrations with Autopsy. A full-featured hex editor is much nicer whenever you're trying to dig down into specific data structures, so I recommend getting and installing both; they work together. Now, once we have Autopsy and HxD installed, the next thing we need to do is set up where our data is going to be located. We don't want to save data to the C drive of our forensic workstation, because we don't want to mix suspect data with our forensic workstation files. Instead, always have a separate drive where you save your case data. This could be an external hard drive, an internal hard drive set up specifically for case data, or a network share at your organization. The drive I'm going to use is the W drive. So again, I'm not saving anything to my forensic workstation's C drive; I'm saving everything to my W drive to keep everything separated. On the W drive I already have a couple of things: one is a hash set (we'll talk a little about hash sets later), and the other is my Cases folder. Inside the Cases folder, I'm going to create a new case. In most organizations, if we're already doing an investigation, a case number has probably already been created for us, so I'll create a new folder with that case number; let's just say it was 001. I also like to give it some sort of indicator of what type of investigation I'm doing.
That way, when I look at the case, I might not recognize the case number, but I can really quickly recognize the tag. So I'll put an H here, which for me represents hacking. I might also include a tag for the location, a tag for the investigator starting the case, or the requesting member's initials; I'll just say XX here. So this structure tells me, for example: the case number assigned by our case management system, that it's a hacking case, which investigator is looking at it, and which member requested the case. Come up with your own standard. You want to be able to look at this folder and understand very quickly what it's about, even if you don't recognize the case number. Then always use that standard for the rest of your cases to keep everything consistent, and try to use the same standard through your entire organization. Inside this case folder, I'll create a few more folders. My basic structure is Docs, Images, Temp, Autopsy, and Reports. While I'm here, I'll go into Docs and create a new text document called casenumber-docs.txt. Now I have my case documentation: open it in Notepad, press F5 to insert a timestamp, and write "case started by". Now our documentation notes are ready to go, and I'll keep that file open. For Images: by this stage the investigating member has probably already given you images, or you're about to create them. So let's say we have some suspect data, and it belongs to exhibit 001: a computer or a thumb drive was brought to you, and that thumb drive is classified as exhibit 001. I'll use that exhibit number directly under Images as exhibit001. Double-click on that, and since I already have suspect data here that's been collected, I'll move it into this directory.
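The folder skeleton above can be sketched as a small shell script. The case tag 001-H-XX, the base path, and the investigator initials are just example values standing in for your organization's standard, not a fixed convention:

```shell
#!/bin/sh
# Create the case folder skeleton described above. The case tag and
# base path are illustrative; substitute your organization's standard.
CASE="001-H-XX"
BASE="W/Cases/$CASE"      # stands in for W:\Cases\<case> on Windows

# Docs, Images, Temp, Autopsy, Reports: the five standard subfolders.
for d in Docs Images Temp Autopsy Reports; do
    mkdir -p "$BASE/$d"
done

# Start the case notes with a UTC timestamp (mirroring Notepad's F5).
printf '%s case started by XX\n' "$(date -u '+%Y-%m-%d %H:%M')" \
    > "$BASE/Docs/$CASE-docs.txt"
```

Scripting this once means every case in the lab ends up with an identical layout.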
Now all of my image data for all exhibits, plus all of my reports, temporary files, and everything like that, are in a single place inside this case folder. If anyone accesses any of the data, they know they can find it directly in that case folder. Everything is kept together, and I'm never going to save anything outside of that folder. I'll have a link to this data below if you want to follow along with my data set, SuspectData.dd. Next, I'll press F5 in my notes and write "started Autopsy 4.19.3 to process case data", because we need to know which version of Autopsy we're using whenever we're processing something. Why am I keeping notes like this? This is basically a diary of everything I'm doing, to make sure I don't forget how I started everything, and to document exactly where all of my data is located. A lot of this will be relevant to my final report. Go ahead and open Autopsy, and now we have a couple of options: new case or open case. I haven't created a case yet, so I'll click on new case. We're asked for a case name, so I'll copy and paste the case name in there. For the base directory, I want the W drive, Cases, then the case name, and I'm choosing the Autopsy folder as my base directory. That way, all of the Autopsy case files are stored inside that Autopsy folder. Click select. So now I have the case number, Cases, then Autopsy, and the case type is single user. Multi-user is also possible: you have to set up a network, including some servers on that network, and then multiple people can connect to the same case at the same time and start processing and analyzing different data sources. That takes a little more to set up, but it is very useful in real laboratories.
For now, we're just going to do single user, which means everything is processed locally and only you will have access to that data. Confirm that the case data will be stored in the following directory. I'm going to copy this string, and then in my documentation press F5 and write "Autopsy case data directory set to" this location, so I give the full path consistently. One reason I'm using the case number in my paths is that I want whoever reads these notes to see that everything is always in the same directory; I want to be able to very easily show that I'm saving the data in the right location. Next is our case number, where we should put just the number without the extra identifiers. Then the examiner, phone number, email, and any notes about this particular case. We have to include the case number because we want to reference it back to the number assigned by our case management system. The examiner information is necessary because we need to know who to contact if anyone has questions about the case, and it will end up on reports. Next is the organization: the organization you're doing the investigation for. That way, whenever you're processing data sets across the organization, you can compare data within that organization. If you need to add an organization, go to manage organizations and click new. I already have the FBI added here just for fun, so let's do CIA: point of contact John Doe, email, phone. Now we have a CIA contact and an FBI contact. Click close; our organizations are set, and we can select which organization we're processing this case for. I'm going to select CIA and click finish. The next thing we want to do is figure out a host name. A computer can have multiple hard drives in it, so we can specify a host name and then attach multiple data sources to that host.
So if you have a network dump, a RAM dump, and a hard drive image, you add them all under one host. I'm going to specify the host and call it exhibit 001. Under exhibit 001, which somebody has obviously already seized and imaged for me, I'm going to add our disk image. So our new host name is exhibit 001. If I had a different computer or another device, it might be exhibit 002, or whatever exhibit number was used in the initial documentation. Click next. Now we can choose the data source type. Disk image or VM file is what I currently have: it's a raw disk image. If I go to Images and then exhibit001, I have SuspectData.dd. A .dd file is almost always a raw disk image, an exact copy, rather than some compressed format like Expert Witness Format. This option also accepts virtual machine files; you can feed it a virtual machine disk image directly. Next is local disk. If I've connected the suspect's disk to my forensic workstation using a hardware write blocker, I can select local disk, pick that suspect disk, and read the data directly. This is much, much faster. The problem is that the suspect disk's hardware could go bad while I'm analyzing it, so it's always better to work off of a disk image, and specifically a copy of a copy. Next are logical files. If we've only collected files from some suspect device, or maybe a cloud service, we can load those logical files directly into Autopsy instead of an image file; the way we access them is a little different. Unallocated space image file is specifically for images of unallocated space. Then there's Autopsy logical imager results: Autopsy has a logical imager, and you can import its results directly. And there's the XRY text export, which comes from the mobile phone analysis software XRY.
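For context on why .dd is "an exact copy": a raw image is a byte-for-byte duplicate of the source, typically made with dd. A minimal sketch, with a small file standing in for the write-blocked source device (the device path mentioned in the comment is an assumption):

```shell
#!/bin/sh
# Demo: create a raw (dd-style) image. In a real acquisition the input
# would be a write-blocked device such as /dev/sdb, never a live disk.
printf 'pretend disk contents' > fake_device

# bs sets the read/write block size; output is an exact byte-for-byte copy.
dd if=fake_device of=exhibit001.dd bs=4096 2>/dev/null

# A raw image should be identical to its source.
cmp -s fake_device exhibit001.dd && echo "image matches source"
```

Because nothing is compressed or wrapped, any tool that understands a disk can read a raw image directly.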
We're going to do a disk image; click next. Where is this disk image located? I'll browse to Cases, the case number, Images, exhibit001, and then SuspectData.dd. That looks okay. The next thing we have is the time zone. I don't necessarily know which time zone this SuspectData.dd came from. Normally it's going to be your local time zone: if the suspect was living in your time zone, their system is most likely set up for it. But if you don't know, I always set it to GMT+0, UTC. If you do know, click on it and select the time zone from the region; I'm just going to use UTC here. Any time I don't know, I use UTC. Sector size: keep that at auto detect, although if you do know it, you can select it specifically. Now, hash values are used for verification and end up in your final report, so we do want to enter them here. I'm going to go to the acquisition verification report that was given to me; it was created with hashdeep, and it has two hash values in it, the MD5 hash and the SHA-256 hash. I'll copy the MD5 value (you can usually find these in whatever report was given to you) and paste it directly into the MD5 field, then copy the SHA-256 value and paste that in. Like I said, this ends up in reports and can be used for image verification later, but note that it says these values will not be validated when the data source is added, so you have to explicitly verify the source. Click next. Now we have our ingest modules, and this section is how we're going to process the data we just gave it. For SuspectData.dd, we're going to pick through it and extract any interesting information we possibly can. Let's take a look at the default modules; you can add additional modules, and you can even write your own if you want to.
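Since Autopsy won't validate those hashes automatically at this point, it's worth knowing how to recompute them yourself for comparison against the hashdeep report. A minimal sketch, where a tiny file stands in for the real image (the file name and contents are stand-ins):

```shell
#!/bin/sh
# Demo: recompute an image's MD5 and SHA-256 to compare against the
# values in the acquisition report. A tiny file stands in for the image.
IMG="SuspectData.dd"
printf 'sample evidence bytes' > "$IMG"

MD5=$(md5sum "$IMG" | awk '{print $1}')
SHA256=$(sha256sum "$IMG" | awk '{print $1}')
echo "MD5:    $MD5"
echo "SHA256: $SHA256"
# Any mismatch with the report means the copy you received is not the
# copy that was acquired, and must be resolved before analysis continues.
```

On large images this can take a while, which is exactly why Autopsy defers it to the data source integrity module.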
Recent activity goes through and looks at things like web browsing activity, recent documents, and recently installed programs. Any recent activity from a system, especially a Windows computer, it will try to extract and place into a special category that's easily accessible for the investigator. This is a good module to always run, because we're usually interested in user activity whenever we're doing an investigation. Next is hash lookup. We can set hash databases of known good files and known bad files. With known good files, we can use that hash database to filter out files we know are fine, so we don't necessarily have to see them in Autopsy. We can also add known bad hash databases, where any file matching a known bad hash is automatically flagged for review. This makes investigation much easier: sharing these hash values helps reduce the amount of data we have to look at, and flags things we know are definitely going to be suspicious. I don't have any hash databases set right now, but what I do always check is "calculate MD5 even if no hash set is selected". We do want to create an MD5 hash of each file being processed, so whenever we're talking about those files or extracting them, we can give that hash value as well. Make sure calculate MD5 is checked; it will take a little longer to calculate those hashes, but in most cases it's worth it. If we want to add a hash database, we can go to the global settings button and either create a new hash set or import one. If it's a new hash set, I need to add hashes manually, otherwise it'll be empty. Import hash set is what we normally use, and I'll talk about hash sets in a different video.
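Conceptually, a known-bad hash lookup is just a set-membership test on file hashes. A toy sketch of the idea, where both the files and the "known bad" list are fabricated for the demo (real hash sets such as NSRL are vastly larger):

```shell
#!/bin/sh
# Demo of a known-bad hash lookup: flag any file whose MD5 appears in
# a list of known-bad hashes. The files and the list are made up.
mkdir -p evid
printf 'harmless notes\n'     > evid/a.txt
printf 'suspicious payload\n' > evid/b.txt

# Pretend feed: treat b.txt's hash as "known bad".
md5sum evid/b.txt | awk '{print $1}' > known_bad.txt

for f in evid/*; do
    h=$(md5sum "$f" | awk '{print $1}')
    if grep -qx "$h" known_bad.txt; then
        echo "NOTABLE: $f ($h)"    # Autopsy would flag this for review
    fi
done
```

Known-good filtering is the same test inverted: files whose hashes appear in the good set are suppressed instead of flagged.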
Next is file type identification, which matches file types based on binary signatures. We can set the file types we want to match in the global settings under custom MIME types, so let's create a new file type identifier. First we need the MIME type; on Linux, you can use the command file -i to get the MIME type of a particular file. I have a hash database file here, so I'll run file -i on it: its MIME type is application/csv, and I'll copy that. Then I'm going to add a signature, and the signature in this case is bytes in hex. I'll use xxd on the hash database and pipe it into head so we can see the first bytes. This is actually just a header, so it's not that interesting by itself, but let's say I know that the signature 2253 is going to appear in multiple files of interest. I'll enter signature 2253 in hex, with a byte offset of zero, relative to the start, so I'm essentially looking for a header. Now I have an application/csv MIME type with the signature 2253, and if Autopsy finds it, it can flag it automatically; for example, we can check "alert as an interesting file when found". Click okay, and I have application/csv and my signature. You can make custom signatures based on any data structure: give a hex or binary value for the data structure that's interesting, and flag wherever that data structure occurs. This is useful if you want to do advanced file structure analysis with automatic detection. Next is the extension mismatch detector. This one is a little easier to understand. Basically, we have a file signature, and we also have a file extension. I have test.hwp here; an HWP file is a Hangul Word Processor file, basically a DOC file for Korean text.
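The two commands above can be tried on any file. Here is the same procedure against a small demo CSV; note that the MIME type file reports can vary between systems, so treat application/csv as one possible answer rather than the guaranteed output:

```shell
#!/bin/sh
# Demo of the signature-discovery steps: get the MIME type with
# file -i, then dump leading bytes with xxd to pick a hex signature.
printf '"SHA-1","MD5","filename"\n' > hashset.csv

file -i hashset.csv      # MIME type; exact value varies by system
xxd -l 16 hashset.csv    # first 16 bytes: 0x22 0x53 = '"S' -> 2253
```

The first two bytes of this file are 0x22 (the double quote) and 0x53 (the letter S), which is where the 2253 signature in the walkthrough comes from.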
If we run file -i test.hwp, we can see it's an application/x-hwp file type, and it has the .hwp extension, so that's what we'd expect to see. But if I find the x-hwp file type and the extension is not .hwp, then I know something is suspicious. That's essentially what the extension mismatch detector is doing. Here I can select which types of files to check; checking only multimedia and executable files will make things run much faster, but if somebody is trying to hide a file, you might want to check all file types. They might rename a DOC file to a JPEG, and whenever you double-click on the JPEG, it doesn't open, because it's actually a DOC file. macOS and Linux use the file header to understand how to open a file, but Windows uses the file extension. Next, go to global settings, and we can see the file types; this is the signature-based type, and if we click on PDF, for example, we can see the extensions we will accept. I know that Hangul Word Processor is not usually in the extension mismatch settings, so I'm going to add it: click new type, enter the MIME type application/x-hwp, and then add .hwp as an acceptable extension. Now, if Autopsy finds an application/x-hwp file that doesn't have a .hwp extension, it will flag it as a suspicious extension mismatch. Click okay; now we have our extension mismatch detector configured. Next is the embedded file extractor. This is fairly straightforward: a lot of files are actually compressed containers, essentially like ZIP files, and this module will decompress everything, take everything out of the file, and index all of the files that were inside. We want to keep that checked pretty much all the time. Picture analyzer, same thing: it looks at images and extracts things like EXIF information from JPEGs.
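What the detector does can be approximated in a few lines: derive a MIME type from the file's signature and compare it against what the extension implies. A sketch using file, where a bare PNG header renamed to .jpg forces the mismatch (the demo file is fabricated):

```shell
#!/bin/sh
# Demo of extension mismatch detection: the content says PNG, the
# extension says JPEG. (\211 and \032 are octal for 0x89 and 0x1a,
# the first and seventh bytes of the 8-byte PNG magic signature.)
printf '\211PNG\r\n\032\n' > photo.jpg

mime=$(file -b --mime-type photo.jpg)
case "$mime" in
    image/jpeg) echo "photo.jpg: extension matches signature" ;;
    *)          echo "photo.jpg: MISMATCH (signature says $mime)" ;;
esac
```

This is also why the Windows behavior matters: Explorer would try to open photo.jpg as a JPEG and fail, while a signature-based check sees through the rename immediately.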
EXIF information is really useful because it might have locations, timestamps, editing programs that were used to modify the picture, or camera settings. So we definitely want to keep picture analyzer; in most cases, it's very relevant. Next is keyword search, and you can do quite a few things with it. By default, it has phone numbers, IP addresses, email addresses, URLs, and credit card numbers. Credit card numbers, URLs, email addresses, and IP addresses are all fairly universal, but the phone number patterns are by default set up for US-style phone numbers, so you might want to modify that if you're looking for other formats. By default, it's also only extracting basic Latin characters, so we will need to modify that if we want to support other character sets. Then we have the option to enable optical character recognition, which looks at images and attempts to extract text from them. Most of the time, we want to turn on optical character recognition. The problem is that it takes quite a bit longer, so if you enable it, processing will take longer, but you will be able to search for images with text in them, which is extremely useful in most cases. If you know you're looking for certain types of keywords, make sure you select these boxes. I usually turn on optical character recognition because I've found it so useful. Now we need to go into global settings and configure some settings. First we can set up our keyword lists. I might create a new list called, for example, drugs, because I want to search for any keywords related to drugs, and I can add as many keywords as I want here. I've added two keywords, and for each I can choose exact match, substring match, or regular expression to make sure I don't get too many false positives. I'm going to do exact match.
You'll get a lot fewer results, but also fewer false positives. Then we have regular expression matches. Regular expressions are an advanced form of pattern matching across keywords and an extremely powerful tool in digital investigations. We'll just do exact match in this case; click okay, and we can see the keywords associated with our keyword list. As you do more investigations, you'll find there are patterns and keywords that recur for certain case types. Whenever you notice those patterns, create a new keyword list for that case type, add those keywords to it, and reuse it in all of your cases; it will save you a lot of time. If you're just looking for English keywords, just click okay. But before we leave, go to string extraction: this is where we can select other character sets we want to include. For example, I sometimes search in Hangul (Korean characters), so I'm going to enable the Hangul character set, and you can enable many others as well. If you're trying to do keyword searching in any other script, make sure you enable this first. This is a global setting, so it will persist for the rest of your cases. So under string extraction, select the alphabets you want to use, and under lists, make sure you create lists for any keywords that tend to show up for certain case types, then click okay. Next is the email parser, which parses out PST and OST files and any other email formats it can find; we usually want to keep that enabled. Encryption detection tries to find encrypted container files using entropy testing. This takes a very long time. If you have good reason to suspect encryption is being used, you might want to keep it on; otherwise, unchecking it will save you a lot of time. With the interesting files finder, you can set up what you think are interesting files; there are defaults.
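The exact-match versus substring-match distinction works much like word-boundary versus plain matching in grep. A quick illustration over a stand-in strings dump (the sample lines are fabricated):

```shell
#!/bin/sh
# Demo: exact (whole-word) vs. substring keyword matching, analogous
# to Autopsy's Exact Match and Substring Match options.
printf 'the cat sat\nconcatenate strings\nmy cats sleep\n' > strings.txt

echo "exact-word lines for 'cat':"
grep -iw 'cat' strings.txt     # -w: whole word only -> 1 line
echo "substring lines for 'cat':"
grep -i  'cat' strings.txt     # any occurrence      -> 3 lines
```

Note how the exact match skips both "concatenate" and "cats", while the substring match catches everything; that trade-off between recall and false positives is exactly the one described above.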
By default we have cloud storage, cryptocurrency wallets, encryption programs, and privacy programs, for Windows hosts only; so the defaults are set up for interesting files on Windows. If we want to change that, we can either select which categories to include or go to global settings, where we can see each category and add or remove programs, and we can also create a new set of what we think are interesting files. If there's a program or set of programs you're looking for that's always involved in the cases you investigate, you definitely want to make a filter to automatically scan for it using these interesting item settings. Click okay. Central repository: this is a really interesting feature in Autopsy. It creates a local database that keeps track of some of the files and artifacts you've seen in past cases. Say you flag a file in one case, and later there's another case that you don't think is related to the first. The central repository can pop up and say: you've already seen this file and flagged it in another case; do you want to take a look at it now? It can really help you find patterns across cases, so I highly recommend using the central repository, because it can show you connections to other cases that you might not have known were there. Note that none of the original suspect data is saved; everything is tracked by hash value. A hash of each file is stored in the central repository, logged against the case it was involved in, but you won't be storing original suspect data. So we enable the central repository and save items in it; I highly recommend you do. Then we can do things like flag items previously tagged as notable: if we've seen them in a past case and tagged them as notable, flag them. I would definitely check that, along with flag devices and users previously seen in other cases.
If you're set up to manage devices and users across cases, select that, just so you know if they're related to the new case. Then there's flag apps and domains not seen in other cases, which can help you find applications you've never seen before and focus in on things that are unique to this case, so I would select that too. This is such a useful feature. Next is the PhotoRec carver. This carver is used to find data that has been deleted or is unallocated; it will try to carve out any of that data and present it back. We have a couple of options here: keep corrupted files, or focus on certain file types. A lot of people want to focus on, for example, JPEG, PNG, or ZIP. I'm going to carve everything. Carving everything does take longer, but you potentially get more data back. The PhotoRec carver is basically for deleted data recovery, and it's a really interesting program. Next is the virtual machine extractor. If the image you're analyzing has a virtual machine inside it, this module will extract that virtual machine and treat it as a separate disk. It's very similar to the embedded file extractor, except it's specific to virtual machine files. Next is data source integrity, which calculates the data source hashes and verifies existing hashes; it verifies that the hashes we're dealing with are okay. Remember, we already entered the MD5 and SHA-256 hashes, so it will try to verify against those. Next is the Android analyzer. Our disk image has nothing to do with Android, so I'm going to uncheck it; but if you are analyzing an Android device, make sure it's selected. ALEAPP is a great tool for parsing Android data structures. DJI drone analyzer: we're not looking at drones today.
So I'm going to uncheck that. Plaso is another forensic tool that's really comprehensive, trying to extract a great many different artifacts. I usually keep it unchecked because it takes a long time to run, and the documentation notes that it duplicates other Autopsy modules. If you want to be very thorough, you can enable it, but it will make your processing take much longer. YARA analyzer: YARA is a really interesting pattern matching tool. We can write YARA rules to analyze file structure and then flag matching files. It's similar to file type identification, and it's used a lot for malware analysis; people share YARA rules, and you can add those rules to Autopsy and detect new malware on a system. Next is the iOS analyzer, using iLEAPP. We're not looking at iOS devices like iPhones, so we'll leave that unchecked. Then the GPX parser: if we find any GPX files, we can get some potential location information. I know there aren't any in our image, but it won't take long to analyze. We also have another Android analyzer; again, if you have an Android suspect image, you'll want to select it. Those are all the default modules. You'll notice I kept most of them on, but I try to remove anything I know I don't need or that will take much too long. The more modules you enable, the longer processing takes, so we usually want a good balance between processing everything and taking a very, very long time. If you don't suspect encryption, remove encryption detection; if you're not analyzing Android devices, remove the Android analyzer, and so on. Just be conservative with which modules you run. Now that we've selected everything we want, click next, and Autopsy starts to add the data source and process it.
You can see in the background we have some messages popping up, and we have this progress bar. If I click finish, the progress will go by really quickly because this image is very small, and now it's done. A normal disk image will take a very long time: you can expect a one terabyte hard drive from a typical Windows computer to take at least several hours to process, sometimes 24 hours or even more. Now we have our data sources. If I expand that, we have exhibit 001, the host name I gave it, and under exhibit 001 one drive called SuspectData.dd. If I select SuspectData.dd, I can expand it and see the file structure, or I can see the files in the main view. In the main view, I can see things like the file name, special attributes, modified time, change time, access time, size, any flags, MD5 hash, SHA-256 hash, MIME type, extension, and the location relative to the disk image. If I click on any of these files, I get the detail view at the bottom. The application view tries to show what the data actually looks like. We also have the hex view, where I can see the raw data of the file, plus the ASCII view next to it. Notice that while I'm looking at this, I can launch HxD if I have HxD installed; if we click that, HxD pops up, and I get a lot more flexibility for searching in a full hex editor. Next I have the text view, with the indexed text: information about the indexing and the file itself. If I go to strings, these are the raw strings found in the file, which are also indexed so I can run keyword searches over them. We can also go to file metadata and see everything like modified time; again, it's in UTC, because that's the time zone I set. Then there's other occurrences: if we've ever seen this file before and have correlation set up, we can see where it was seen.
So in a previous case X, I had a source named SuspectData.dd, the same suspect image, with the same file name and hash value; I have seen this file before in a prior case. Once I see this, I can say that our current case and case X may be related, especially if this file was suspicious. So other occurrences can be very interesting whenever we're looking at suspect files. Most of the time, you'll be looking at things in the application view or hex view; at least, that's where I spend a lot of my time. So: we have our data sources tree with exhibit 001, one image under it, SuspectData.dd, all of the files available in the file views, and, whenever I click on them, the detail view at the bottom. Most forensic tools are set up with this kind of workflow. If I were processing a bigger case, we'd also have a lot more showing up over here. So let's look at these views first. Under file views, we have some filters that are automatically created. We can filter, for example, by extension: images based on their extension, or documents if there are any in the image. Many times in investigations, we want to see all of the images on a system, or all of the documents. We don't really care where they're located, but we need to scan through, for example, the PDFs to see if anything is suspicious. Using these filters is a really quick way to focus on just the file types you want. We also have executable file types: maybe we want to see all of the .exe files on the system and look for anything suspicious there. After that, we can also filter by MIME type, which we've talked about: applications, images, or text. Same principle as filtering by extension, except this uses the file header instead of the extension. Next is deleted files.
We can have files deleted from the file system that were recovered, or at least partially recovered, and we have all recovered data, whether it was in the file system or not. Under file system we have four items; under all, six. You'll notice those four items are also in all, but there are two extra carved files, probably carved out of unallocated space. If you have carving enabled, you might see more under all; make sure you check what data has been carved. Next, we can also filter by file size. By default we can do 50 to 200 MB, up to 1 GB, and 1 GB plus. Sometimes it's interesting to filter and say: only show me items over one gig, because maybe they're encrypted containers holding a lot of interesting information. Or if you have an Excel file that's 16 gigabytes, that's very suspicious. So you might want a quick look at anything over one gig; sometimes it can lead you in a direction. All of these views are meant to give you a quick look at odd situations and odd data that might stick out at you. Next is data artifacts. I didn't have much data on this disk image, so we don't really have any data artifacts, but every ingest module we selected will show its hits under data artifacts, analysis results, or OS accounts. Since we don't have an operating system on this disk image, nothing shows up here, but these are essentially the filters that surface all of that information. Now, one workflow I normally follow: once an exhibit has been added, our suspect image has been indexed, and processing is mostly done, the first thing I tend to do is keyword searches. We might already have our keyword lists, but now I'm going to run keyword searches specific to this case.
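The size filter has a direct command-line analogue in find. A sketch of the same triage idea, with the threshold scaled down to 200 KiB so the demo files stay small (Autopsy's presets start at 50 MB):

```shell
#!/bin/sh
# Demo of size-based triage: list files above a threshold, since
# unusually large files (possible encrypted containers, bloated
# spreadsheets) deserve an early look.
mkdir -p demo
dd if=/dev/zero of=demo/big.bin bs=1024 count=300 2>/dev/null  # ~300 KiB
: > demo/small.txt                                             # empty

# -size +200k: strictly larger than 200 KiB. In real triage you might
# run something like -size +1G over an exported or mounted tree.
find demo -type f -size +200k
```

Only big.bin is listed; small.txt falls under the threshold, which is the same narrowing effect the Autopsy filter gives you.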
Keyword lists have general keywords that might be relevant to a similar case type; keyword searches are specific to my individual case. So I know, for example, that this disk image has something related to cats. So I'm going to search for cat, do an exact match, search suspect data.dd, and save the results. I click search, and then, under our analysis results, our keyword hits for cat show up. We can expand the string literal keyword search, and there we have our search for cat as a string literal. So now I have a filter set up for that particular keyword, and I can go through and look for anything that I think might be interesting. Let's say that I think this image is interesting. The next thing that I would do, if it's related to our case, is right-click on it and go to add file tag. If I know it's related to the case, I would probably do either bookmark or notable, depending on what type of file it was and how it was related to the case. So let's just go ahead and bookmark it. Then we find another picture. Let's say that I know this image is definitely related to the case, and it is notable. So I'm going to add a file tag and go to notable. What we tend to do is tag items that are related; that way we can filter down to the related things and build our story around those tagged items. Your report will refer to those tagged items and say why they're relevant to your case. All right, next. So the keyword search for cat gave us eight results. Next, I'm going to do a keyword search again for cat, but this time a substring search, and I'm not going to save the results. I click search, and a new tab is created. In this search we got 10 results; for the exact match on cat, we got eight. The difference is that with an exact-match keyword search, cat must be by itself.
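The exact-versus-substring distinction is the same one that regular expressions draw with word boundaries. A small sketch of the difference (the sample text is invented for illustration; the counts are for this sample, not the disk image in the video):

```python
import re

text = "cat cats -cat concatenate Cat"

# Substring match: "cat" anywhere, even inside longer words.
substring_hits = re.findall(r"cat", text, re.IGNORECASE)

# Exact match: "cat" delimited by word boundaries, so "-cat" and "Cat"
# count, but "cats" and "concatenate" do not.
exact_hits = re.findall(r"\bcat\b", text, re.IGNORECASE)

print(len(substring_hits))  # 5
print(len(exact_hits))      # 3
```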
So you can have dash cat, and it can be capitalized or lowercase, but it must be essentially on its own; there can't be other characters around it except things like dashes. But with a substring match, we can have things like cats, because cat is a substring of cats. So we actually returned two additional results by using a substring match. Let's say this one might be relevant to our case, but I'm not really sure, so I need to come back to it. I'm going to add the file tag follow up. I tend to go through everything and tag things as follow up unless I absolutely know they're relevant to our case, in which case I'll mark them as notable; but I use follow up a lot. Then, once I've done my preliminary examination, I go back and follow up. Now you might be thinking, why am I tagging things? Well, now I can close these searches and go into the tags folder. In the tags folder, I'll expand it; under bookmarks, I have one picture bookmarked, and then I have the picture view. So I can access that picture directly along with all of its data. Same thing for follow up: I can go back to follow up and say, oh, all of these could have been related, and now I need to do a little bit more analysis on each of them; and I have access to them directly. I can right-click on a file and, for example, extract the file and work with it in other tools if I need to, or I can just start to do some analysis on it directly. I can also see a text view and everything, just like before. And the same goes for notable items: I already have notable items, and I can start to use this to build up my report. I can say that these are related, and also how they're related and how the suspect was using this data. When did the suspect access this file? When did the suspect download this file? Things like this are what I'm going to need to answer in my report.
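When you extract a file to work on it in other tools, it's good practice to hash the extracted copy so your case notes can show it matches the file inside the image. A minimal sketch (the path in the comment is hypothetical):

```python
import hashlib

def md5_of(path: str) -> str:
    """MD5 of a file, read in chunks so large extracts don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the hash in your case notes next to the extraction entry, e.g.:
# print(md5_of(r"W:\Cases\001\temp\extracted.jpg"))  # hypothetical path
```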
So I would start to use tags for images, and then probably go into things like Windows registry artifacts to try to get user activity, to say what this user was doing with this file. So we did a keyword search, we started tagging things based on the keyword search, and we found one bookmark, one follow-up, and one notable item. The next thing I might want to do is go to generate report. There are a couple of different report types, but the most basic is the HTML report. Click next. I'm going to process suspect data.dd, click next. Then I choose which data to report on: we can do all results, all tagged results, or specific tagged results. I'm going to do specific tagged results, and I want a report that includes bookmarks and notable items, so I uncheck the others and click finish. This generates a report about the data we've already tagged. If I click on this link here, I can see the report file. It has some of our metadata from when we started the Autopsy case, and all of our locations, which should match our documentation. On the left-hand side, we can see tagged files: we have our bookmark, which is one of the cat pictures, with its metadata, and then the notable item with its metadata. If I click on any of those links, I can see the file directly, because it's been exported with our report. So I would have all of my notable images, for example, and if I click on them, I can access them. I can now copy this report out and give it to anyone I'm reporting to and say, here are the things we found that are notable. This fits along with the report that I'm writing: in my final investigation report, I can refer back to the images in this exported report file and say, here are the images I'm referring to, please see them in this report.
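Autopsy builds the HTML report for you, but the idea behind it is simple enough to sketch: a table of tagged items with their metadata, escaped for HTML. The records and field names below are invented for illustration; they are not Autopsy's actual schema or output format.

```python
import html

# Invented tagged-item records, for illustration only.
tagged = [
    {"tag": "Bookmark", "file": "/img_suspect data.dd/cat1.jpg"},
    {"tag": "Notable",  "file": "/img_suspect data.dd/cat2.jpg"},
]

def render_report(items) -> str:
    """Render a minimal tagged-files table as a standalone HTML page."""
    rows = "\n".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(
            html.escape(item["tag"]), html.escape(item["file"])
        )
        for item in items
    )
    return (
        "<html><body><h1>Tagged Files</h1>"
        "<table><tr><th>Tag</th><th>File</th></tr>"
        + rows +
        "</table></body></html>"
    )

report = render_report(tagged)
```

The real report also copies out the tagged files themselves, which is what makes it useful as standalone supporting evidence.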
Now if we open up our directory structure again, I'm going back to our case folder and into autopsy. There we have our case files for Autopsy, and we have our reports folder; the report we generated is in this folder directly. That's the report plus all of the content: the bookmarked images, the thumbnails that were created, everything like that. So all we have to do is make a copy of this folder, take it out of the autopsy folder, put it into my reports folder, and name it supporting evidence. I would have some sort of Word document or PDF as my main report, the write-up of my report, and I would refer back to this supporting evidence in these HTML reports. With the HTML report, it's very easy for people to see exactly what you're talking about. I would also potentially include these images in my report while referring to this case. Okay, once you're done with your report, go ahead and click close. Then under reports, you can see the report that you generated and when you generated it, and if you right-click on it, you can open the report and get it back. So far, we've added a data source; once it was done processing, we did a keyword search for cat and found some responsive files. We also looked at our file views and our analysis results for search terms, we tagged a few items, and then we generated a report based on those tagged items. And that's really a workflow that will work in most investigations. There are also quite a few other tools that are useful for this data set. The one that's probably going to be the most interesting is images and videos. If you click on that, we open up a new utility, and this gives us a gallery view of all of the images and lets us flag things really quickly.
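Copying the report folder out can be scripted; `shutil.copytree` brings the HTML along with every exported file and thumbnail, so the copy stands alone outside the case database. The layout below mirrors the case-folder structure described above, but the exact folder names are hypothetical, so adjust them to your own standard.

```python
import shutil
from pathlib import Path

def preserve_report(case_root: Path) -> Path:
    """Copy the generated report folder into the case's own reports folder
    as 'supporting evidence' (folder names here are hypothetical)."""
    src = case_root / "autopsy" / "Reports"
    dst = case_root / "reports" / "supporting evidence"
    shutil.copytree(src, dst)  # copies the HTML plus all exported content
    return dst

# Usage (hypothetical path):
# preserve_report(Path(r"W:\Cases\001"))
```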
So if you deal with a lot of images and video, this image and video gallery view really helps. So really: if you can add a data source, do some keyword searching, sort by images and videos, and do some filtering; if you understand tagging items; and if you can generate reports based on the things you tagged, then you can do at least a basic investigation. That's really all it is: loading up the data, searching through it, usually with keyword searching first, finding anything interesting, flagging it, and then building a story around those interesting things. Usually the story is about why a user was doing a specific thing. So that's it for today. Thank you very much.