Welcome back, everyone. In the last video, we talked about Autopsy 4.19 and the data source that we added, Exhibit 1. Exhibit 1 didn't really have very much data in it, but we talked about how to analyze it using the default ingest modules in Autopsy. Since there wasn't very much data in suspect data.dd (it was just a thumb drive), we couldn't really see all of the features of Autopsy. So I ingested another, bigger image that took much longer to process, and now we have much more realistic data to look at. I want to describe two things we didn't really cover last time: the Data Artifacts section and the Analysis Results section. If you want to get access to this image and process the data in Autopsy yourself, it will be linked below. It's a Windows disk image, and we have a couple of different partitions here. Let's choose the bigger one. If we scroll down, we can see all of the files in the root directory. We have our $MFT here, so we know this would be like the C drive, if that's what the system drive is set to. Now we can basically just go through and search. Let's go into Users; the user account here is John Doe. Inside the user's directory we have NTUSER.DAT, the registry hive for the user account. Most of the user settings in Windows are saved in the user directory, in that NTUSER.DAT file. If we click on it, you can see down below (we talked about the Application view last time) that since this is a registry hive, the Application view becomes a registry-viewer-type view. It's not super powerful, but it's enough to do some investigation of the Windows registry. So we can go through this Windows disk image just like we went through our USB stick image, and you can see that it is a different exhibit.
We already talked about searching, especially using keywords and keyword lists to search through all of this suspect data, so I'm not going to cover that again. What I am going to show are the Data Artifacts and Analysis Results sections. First, let's go to Data Artifacts and expand it. All of the modules that we selected at the beginning, when we added the data source, processed that data source, and everything that was processed shows up in Data Artifacts if it has a category associated with it. Some of these things are very basic, but also very useful. For example, looking at all of the installed programs, we can scroll through here very quickly and see what applications are installed on the suspect system. QuickCrypto 4.1, for example, might be interesting for us, and we can also see where we're getting that data from, when it was installed, and its name. So now we know very quickly: hey, crypto, that could be a concern, because it might be about either cryptography, as in encrypting some data, or maybe cryptocurrencies, in which case we might have to deal with some sort of cryptocurrency wallet. Next is the FileZilla client, an FTP client, which means uploading and downloading files on FTP servers. That's not usual to see on a system unless they're doing some sort of web development or something like that. So if you see it, they are using FTP, and it might be interesting to look at the configurations and see what FTP servers they've been connecting to. Next is Angry IP Scanner. That's really not common; for some reason they were scanning IP addresses. Why were they using this information? So just from a quick look at installed programs, we already know quite a bit about how this system was likely used. Obviously they have some sort of skill in computing: at least they know about cryptography, IP scanning, and file transfers.
We also need to be a little bit aware of Npcap, the packet-capture driver associated with Wireshark, so they know about network monitoring, and then Nmap, another scanner. We have some information there just based on the installed programs. From there I can say, okay, maybe some of these are related to the investigation question that I'm asking. If I think any of them are, I can go investigate those programs directly. This is the lead that I'm looking for. All right, next let's look at Metadata. Metadata is getting pulled out of a couple of different locations. For example, PDFs and JPEGs might have metadata inside the object, and usually we'll get things like file timestamps, the owner (whoever created it), the date modified, versions, programs that were used to create the file, things like that. Metadata is fairly interesting, especially if you already have timestamps for a timeline that you're looking to create, and for looking at names associated with different objects. Here we have the same name associated with these three objects; even if they had a different file name, we can still see that the author is set to the same value, the organization is Microsoft, and then the program name. Metadata is a really interesting source of information because all of it is embedded in the file itself. Timestamps on the file system, for example, could be modified, but if those timestamps are saved inside the contents of the file, then people might not know that they're there. It might give you a more accurate or realistic timestamp, and it might give you things like author names and the programs that were used to create that file. Next is Operating System Information. This is just about the operating system itself, and it's being pulled from the Windows registry. One of the most important things to see here, under the SOFTWARE hive, is that it's a Windows 10 Home system.
Then we have the path that it's installed in, C:\Windows, so now we can confirm that the system drive is the C drive. In the last video I was talking about how Windows can be installed on a different drive, so you could have a different system path. Now we've confirmed that it's the C drive here, and we found that from the Windows registry. The source file path is config\SOFTWARE, so this is a Windows registry hive, and that's where we're pulling our system path from. If you need to know what the system path is and where we're getting that data from: it's coming from the SOFTWARE hive. The owner is set to John Doe. Of course, that doesn't necessarily mean that John Doe owns the system; it just helps us establish that John Doe might be the owner or the original person that installed the system. Next, Recent Documents. Recent documents are interesting because they indicate user behavior: what files were recently accessed, and where were they accessed from? You can see a couple of things here. This password list was accessed from the C drive, but we also have a password list, or some file that looks like a password list, accessed from the E drive. Based on this .lnk file, we have recent documents accessed from the E drive. If external drives were not collected, I now know that an external drive or some sort of extra disk exists that the user was interacting with on this date, and I should get access to it as quickly as possible. Maybe we have to go back out and seize another drive because we missed it. Users accessing illicit content might have it on an external drive, and the first responders didn't know about it; maybe it was hidden somewhere, and so they missed it. When we analyze this, we can say: hey, all of these images and videos with very suspicious names have been accessed from an external disk.
We can investigate more about what the E drive likely was, maybe even find its serial number, and then go back out to the suspect's house and seize that disk if they haven't already destroyed it. Recent Documents is extremely useful for our investigations because you can see where the application is being run from, and you can see whether additional data sources might be available. You can also see what they're interested in. Here we have Bettercap for Windows, which is an ARP spoofing or hacking program, and we have password lists, so this person is at least interested in hacking concepts. Is that illegal? No, but if that's related to our investigation question, then these are very relevant. More access from the E drive: now we have E drive images that were accessed. So on that E drive, we have not only a password list but also some images that might be interesting to us. Of course, we can't just click on this .lnk file and get the image data. This is a link to the file that was accessed; we won't get file data, but we can get some metadata and things like timestamps for when the file was accessed. All right, so Recent Documents is really interesting for investigations to get an idea of specific user activities. It's also extremely helpful because these .lnk files are only created when the user accesses the file directly, so you can establish user knowledge of the individual files, and that's sometimes more important than establishing what the content of the files is. Next, the Recycle Bin. We have a single file listed here. We get the full file name, and we also get the time that the file was deleted. If we look at this in Hex view, we can see some of the original data because it hasn't been completely cleared yet, and we also have some of our text strings here. This file was deleted and is in the Recycle Bin; we can get some information about it and then potentially recover the file just by right-clicking and going to Extract File(s) if we need to.
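Autopsy parses these .lnk files for you, but it helps to know what's actually recoverable from one. As a rough sketch, the first 76 bytes of a .lnk file (the ShellLinkHeader, with field offsets per Microsoft's [MS-SHLLINK] specification) carry three FILETIME timestamps for the target file:

```python
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft: int) -> datetime:
    """Convert a Windows FILETIME (100-ns ticks since 1601-01-01 UTC)."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_lnk_header(data: bytes) -> dict:
    """Pull the three embedded timestamps out of a ShellLinkHeader.

    Offsets per [MS-SHLLINK]: CreationTime at 0x1C, AccessTime at 0x24,
    WriteTime at 0x2C, each a 64-bit little-endian FILETIME.
    """
    if struct.unpack_from("<I", data, 0)[0] != 0x4C:
        raise ValueError("not a ShellLinkHeader")
    created, accessed, written = struct.unpack_from("<QQQ", data, 0x1C)
    return {
        "created": filetime_to_dt(created),
        "accessed": filetime_to_dt(accessed),
        "written": filetime_to_dt(written),
    }
```

So even when the target itself lives on a missing E drive, the link preserves when that target was created, accessed, and last written.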
But normally I would just click on Hex, and then, like I said last time, I have HxD installed as well, so I would just click on Launch in HxD. Now we have a whole view of our file, and it does actually look like a word list. Next we have Run Programs, and Run Programs is pulling from a couple of different data sources, but the one that I like the most is these prefetch files. They give us the date and time of the prefetch file, and if we click on one that we're interested in, for example Nmap, we can see the Nmap setup executable. So this was run at 5:15. Then we have Nmap run at 5:37, and the last time the prefetch file was updated was 5:37. If all of these are interesting, like if I need that timeline, maybe the suspect was scanning different websites or scanning a network, then I know that these are all related and that timeline is also related. So I'm going to right-click on the Nmap setup prefetch file and Add Result Tag, as Bookmark (probably a notable item, but I'll say Bookmark for now), and then I'll also bookmark the other two. Now later I can go ahead and build my timeline saying that Nmap was installed at this time: we have evidence that the Nmap setup executable was run at 5:15, and then we have Nmap itself being run most recently at 5:37. That's relatively consistent with a timeline. The next thing I would probably do is go to web accounts or web information and see if Nmap 7.91 was recently downloaded. That would be another interesting event that likely happened: the user probably downloaded the Nmap installer, ran the installer, and then ran Nmap. So you can build out these timelines, and I would be tagging all of the relevant information as I go along. Run Programs is interesting because these are programs that at least the Windows system believes have run, based on prefetch files. We can also get things like the run count.
The count isn't always 100% accurate, but at least you can see that things have been run. So theoretically, building up our timeline: we have the Nmap setup run at 5:15, and then Nmap run three times with the most recent time being 5:37. So between 5:15 and 5:37 we have at least three runs of Nmap. Nmap is a command-line program, so we can try to investigate the command line and see if we have any evidence of prior runs of Nmap and what the commands were. I wanted to check if that's true, so I go to Exhibit 2, the Windows 10 image. I'm in what is basically the C drive; I go to Users, and there's only one user here, John Doe. Inside AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine we have ConsoleHost_history.txt, and this is the command history for PowerShell. So if they were using PowerShell, I expect to find some commands in here, and we do see some commands. As we scroll through, I have Bettercap a couple of different times with all of the options, we have ipconfig, and we have three instances of Nmap. Specifically, they're interested in the 10.0.2.15 IP address and then the entire range, twice. So they did run Nmap three times from PowerShell. Now, do we know exactly when they ran that? Not exactly. But what I do know is that it must have been between the time that Nmap was downloaded and installed and the last run time. Most likely this last command here was run at the last run time, when the prefetch file was updated, and the two other commands were run in that 15-minute period or so. So this is also relevant: I'm going to Add File Tag and bookmark it. Now I have a lot more support for the commands. I can see that Nmap was installed, and using this file I can see which options they were scanning with.
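ConsoleHost_history.txt is just plain text, one command per line, written by the PSReadLine module, so it's trivial to triage programmatically. A minimal sketch (the sample commands below are illustrative, not the exhibit's exact lines):

```python
def nmap_commands(history_text: str) -> list[str]:
    """Return every PowerShell history line that invokes nmap.

    ConsoleHost_history.txt is plain text, one command per line.
    """
    return [
        line.strip()
        for line in history_text.splitlines()
        if line.strip().lower().startswith("nmap")
    ]

# Hypothetical sample resembling what we saw in the exhibit:
sample = """ipconfig
nmap 10.0.2.15
nmap 10.0.2.0/24
nmap -sV 10.0.2.0/24
"""
print(nmap_commands(sample))  # three nmap invocations
```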
We also have evidence that Nmap was only run three times, and we have evidence of three runs, which means we've pretty much reconstructed the entire Nmap picture, at least for this timeframe. That's really how I use these things. Data Artifacts gives you a quick overview. For example, under Run Programs we see something suspicious and we flag it. Then, based on whatever we flagged, I either do keyword searches or I go through the exhibits and find other data sources that I know should exist, like the command-line history, and I check those because I think they're relevant to whatever I'm looking for. In this case it was Nmap, and we were able to reconstruct some of that timeline. Shellbags are also interesting. These come from the Windows registry, and shellbags can give us a lot of information for reconstructing the folders and files the suspect was accessing. There's a lot of really interesting information here. Shellbag entries are created every time a folder is accessed, so if a suspect opens up a folder, you're going to get a shellbag entry for that particular folder. You'll have things like the C drive, which is going to be updated all the time, because the C drive is pretty much always being accessed, and things like the D drive. Okay, now we know that there's also a D drive. Remember, we also had an E drive from Recent Documents. So we now know that the D drive was accessed and the C drive was accessed. From UsrClass.dat we're parsing out the paths that were accessed: things like the E drive, and then FTK Imager on the E drive. Just like our Recent Documents can tell us files that were accessed, this tells us directories that were accessed and approximately their time of access. Shellbags are interesting because they help us establish that the user knew about certain directories. Next: USB Devices Attached.
I use this almost first thing in a lot of cases: we want to know what USB devices were attached to the system. We can see here that an LG (G2 or G3) Android phone was likely attached at a certain time. I would be interested in getting that phone if we don't already have it, and we might be able to get things like the device IDs. This can be useful for a lot of different things. First off, establishing that the suspect has a particular type of phone. If we were able to seize the computer, which is what we're analyzing, and they said they didn't have a phone, well, if we see this, then some phone was connected to the system. Is that the suspect's phone, or was it somebody else's? This could also be useful for other things, such as when two suspects claim that they don't know each other. If we look at the records of connected USB devices, we might find out that suspect A charged their phone on suspect B's computer, when they say they'd never seen each other before. That could be evidence that actually links them physically at a location with certain devices. So these records can be used for a lot of different things, like trying to connect users to each other: if they say they don't know each other, then why was your phone connected to this person's PC? USB device records are also extremely valuable for knowing whether you've actually collected all of the data. Things like Recent Documents, shellbags, and USB devices, if we can triage them, are really useful for knowing whether we actually got all of the devices or not. Next, Web Accounts. Anything that was logged into will basically be here. We have a local web account just on Google Chrome, but probably the more interesting one here is a Proton Mail account, accessed using Google Chrome. So now we potentially know when Proton Mail was last accessed, but we should go to Web History if we want to find more details.
We don't actually get the username (maybe they didn't log in), but we can now be on the lookout for any Proton Mail accounts related to this suspect. Next is Web Cache. Web Cache is mostly useful for understanding the domains and when those domains were accessed, looking at things like the headers and content links, and then we have the actual data itself. The data can sometimes be reconstructed. A lot of the time it's just going to be these URLs, but sometimes you might have some interesting data in here, or something indicating that your suspicious data was accessed and that data is still available in the web cache. But again, mostly we use it for things like the domain and the date created. Web Cookies are basically the same thing. Cookies give us the domains that were accessed, the time they were accessed, any properties, as well as the program name, so which browser they were using when the cookie was set. If we click on one, we can see any of the data that was set in the cookie, so we might be able to get things like session IDs. Cookies are really interesting for establishing that somebody was visiting a website, but the value of the cookies can potentially be used later if we need to authenticate to an external service. For the investigation, cookies are interesting for timelining as well as for the settings of the system while the user was using it, but being able to extract cookies and then replay them to authenticate to websites might be interesting to investigators, if they are allowed to do that. Web Downloads looks at files and programs that were downloaded, and especially where those programs were coming from, which can be very interesting. This is super interesting for things like malware analysis.
If we know that somebody's system was infected, we can look at Web Downloads, see when something was downloaded, and then potentially find the infected binary. We can also look at things like nmap-7.91-setup.exe. Remember, we already have evidence of when Nmap was run for the first time, and now we have the time Nmap was actually downloaded and where it came from: nmap.org, then the distribution path, then the setup executable, and then where it was saved to. It was saved to the Downloads folder at this particular time, 5:14. So this is going to go into my bookmarks. Now our timeline: at 5:14 the suspect downloaded Nmap, at 5:15 the suspect installed Nmap, and then between 5:15 and, I think it was, 5:37, the suspect ran Nmap at least three times using PowerShell, and we know that they were scanning an internal network. So now we have a couple of different data sources that we can string together to tell the story about how the user was using Nmap, and if we know what that internal IP address was, we might be able to establish the intention of the user, especially with the PowerShell history file. Next we have Web Form Autofill, and we have a username "bettercap", which is kind of a weird username, and then a username "dreammaker82". Next I would do a keyword search for dreammaker82, because that's likely one of the suspect's usernames, and I want to know what they were actually doing. Okay, and then we have Web History. Web History can give us a lot of interesting information about what people were looking for and what they've accessed. You can see that I have a web address that's local, and it looks like it was probably Bettercap running locally, but I also have some keyword searches like "hacker background", "hacking tools", "learn everything about ethical hacking".
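Once artifacts like these are tagged, building the timeline is really just sorting events by timestamp. A sketch with hypothetical event tuples (the dates below are placeholders, not the exhibit's real values):

```python
from datetime import datetime

# Hypothetical bookmarked artifacts; the dates/times are illustrative only.
events = [
    (datetime(2021, 3, 1, 17, 15), "prefetch", "nmap setup executable run"),
    (datetime(2021, 3, 1, 17, 14), "web download", "nmap installer saved to Downloads"),
    (datetime(2021, 3, 1, 17, 37), "prefetch", "nmap.exe last run (run count 3)"),
]

# Sorting by timestamp turns scattered tagged artifacts into a readable story.
for ts, source, desc in sorted(events):
    print(f"{ts:%Y-%m-%d %H:%M}  [{source}]  {desc}")
```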
So a couple of different sites that were accessed are specifically about hacking, and then we have some hacking tools that were downloaded and played with. This could be somebody just learning how to do some hacking. We have records of accessing a local file (we know it's local because it's on the C drive), so they downloaded a file and then opened it with the browser, and a couple more local files that were opened. Okay, so Web History is an extremely interesting data source for finding out what people were interested in and the order in which they were researching things. Usually you can see the entire progression of how they were learning about something and what they were trying to do. Then we have Web Search: this is what the user was searching for, based on the history file. In this case, it's the Edge user data default History file, so the user was using Edge, and we're parsing the History file to look for search terms. They were using Bing, so most likely they were just searching directly in the address bar. Of course, with Bing, the first thing you're going to search for is Google Chrome or the Brave browser. The next was "how to download movies for free", and then they watched or were trying to access Stargate, Candyman, "how to hack", stego programs. Notice we've switched over to Google.com, and this is the Brave browser's user data default History file. So one thing to be aware of when you are looking at this Data Artifacts view: on the very left-hand side we have the History file, but that History file is coming from Edge. Scroll down a little bit and we still have a History file, but that one is coming from the Brave browser, so the user was actually using a different application. Just make sure that you differentiate between those applications and don't just generically say "history file"; give the entire location of the History file that each search term is coming out of.
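Under the hood, Edge, Chrome, and Brave all store this History file in the same Chromium SQLite schema, which is another reason the full path matters: the schema alone won't tell you which browser it came from. A minimal sketch of pulling rows out yourself (always query a working copy, never the original evidence file; the `urls` table and its microseconds-since-1601 timestamps are standard Chromium):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def webkit_to_dt(us: int) -> datetime:
    """Chromium stores times as microseconds since 1601-01-01 UTC."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=us)

def dump_history(db_path: str) -> list[tuple[datetime, str, str]]:
    """Read time/url/title rows from a Chromium-family History database."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT last_visit_time, url, title FROM urls ORDER BY last_visit_time"
    ).fetchall()
    con.close()
    return [(webkit_to_dt(t), url, title) for t, url, title in rows]
```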
Now, these searches are directly what the user was typing, right? So we know the user was interested in Tor, in how to hide your IP address, keyword list downloads, wordlists, a 7-Zip download, for example. They were experimenting with hacking tools, and that's essentially what they were searching for and interested in. Web History tends to have a lot of data, while Web Search contains things that were intentional for the user, so I would usually go to Web Search before Web History, to get a better idea of what the user was trying to do, and then go to Web History for browsing behavior rather than searching behavior. So that's everything we got from the default modules on a fairly standard Windows system. This system didn't have a lot of user activity, but you can see that we were able to extract things like web behaviors, local file access, and local folder access. All of those are really important for establishing that the user was doing something and that they knew they were doing it; both of those things matter for digital investigation questions. Now let's move on to Analysis Results. These are related to things that we were specifically looking for. We did enable the encryption detection (entropy) module when we were processing this, so it does have Encryption Suspected. Most of these have a .db extension, and any of these could be encrypted; they might not be. It really depends on where the file is located and what it's named, whether it's going to be suspicious for your case or not. In this case, we have Windows prefetch and then this database in the prefetch directory. Because it's in prefetch, I'm not going to worry about it too much. If I saw something in a really weird location with a weird file name, like "1" or something like that, I might be more interested in it.
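The heuristic behind that module is Shannon entropy: encrypted or compressed bytes look uniformly random, so they score near the 8-bits-per-byte maximum, while text and most databases score lower. A quick sketch of the measurement:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for encrypted/compressed data, lower for text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Plain text sits well below the range typical of ciphertext.
print(round(shannon_entropy(b"password list, password list"), 2))
print(shannon_entropy(bytes(range(256))))  # uniform bytes -> exactly 8.0
```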
So you can get a really quick overview of things that might be encrypted, and then just go through and see if there's anything that you think could be suspicious. EXIF Metadata results come from the JPEG images; if they have EXIF information in the header, then we can see it. We usually get things like date created, device model, altitude, sometimes GPS coordinates, and device make. If you click on Application, we can see the image itself, and then we have the analysis results: EXIF metadata was detected. In the Metadata view, it's basically all going to show up, including things like date created. If EXIF information does exist, take a look at the Hex view, because sometimes it's not exactly standard and you might get a little more information than what the parser can extract for you. So this is just a good way to filter and find out which images have EXIF information. Get a quick overview, but I always take a look at the Hex view and then go down and look at the data anyway. Next, Extension Mismatch Detected. In the first video, we talked about extension mismatches quite a bit. Here we have an image, and you can tell that this image is probably a PNG or a JPEG or something like that, but the extension is .bin. If we take a look at the Hex view: yep, it starts with PNG. So it has a PNG header but a .bin extension. It comes up as suspicious because we don't expect PNG images to have a .bin extension. If we scroll over, we can see the extension and the MIME type that was detected. Most of these are going to be PNGs, and they're either going to be .bin files or icons. PNG content with an icon extension is actually probably okay; JPEG content as an icon is a little more rare, but probably still okay. What we're mostly interested in is something like JPEG content detected where the extension is .doc, right?
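That check is straightforward to reproduce: compare the file's first bytes against known signatures and flag disagreements with the extension. A sketch with a few common signatures (Autopsy's real detector knows far more file types than this):

```python
# A few common file signatures (magic bytes) -> expected extensions.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": {".png"},
    b"\xff\xd8\xff": {".jpg", ".jpeg"},
    b"%PDF-": {".pdf"},
}

def extension_mismatch(name: str, header: bytes) -> bool:
    """True when the content's signature disagrees with the extension."""
    ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
    for magic, exts in SIGNATURES.items():
        if header.startswith(magic):
            return ext not in exts
    return False  # unknown signature: nothing to flag

print(extension_mismatch("photo.bin", b"\x89PNG\r\n\x1a\n\x00"))  # True
print(extension_mismatch("photo.png", b"\x89PNG\r\n\x1a\n\x00"))  # False
```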
Basically, what I would do is go through and see if anything catches my eye. This is also where it's really good to have your NSRL hash database installed, because most of these are related to Windows apps. If you've already categorized them as known good, you basically filter out most of these PNGs, and then we won't see them in our main view. Interesting Files: these are filters that we have to set up manually. The encryption program that was detected is the Tor Browser, so it was at least installed. Then we have privacy programs: Tor and Brave, so Brave is set up as a privacy program as well. With the Interesting Files filter, you basically set up on your own what you think an interesting file is. Encryption programs and privacy programs for Windows have been set up by default, but there are so many more interesting files; it just depends on what types of cases you're working on. It's basically the same thing for Keyword Hits. We have a lot of keyword hits here. We have our single literal keyword search, which is basically what we ran in the last video for "cat" against Exhibit 001, and then we have our drugs search and our email addresses search. Email addresses will almost always return a lot of data, because it finds any email address on the system, and a lot of email addresses are included in libraries and things like that. We can even find invalid emails like %s@hotmail.com. So for email addresses, you're going to get a lot of false positives if you're just searching for all types of email addresses. It can be interesting if you know certain domains that you're interested in. What it can also tell you are the files with hits. If we want to sort by the number of files that are responsive, I can just double-click that column, and at the very top I get the top email address listed. Now I can just double-click on it.
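Hash-set filtering works exactly like it sounds: hash every file and drop anything whose digest appears in the known-good set. A minimal sketch (the one-entry set below is a hypothetical stand-in for the real NSRL Reference Data Set, which ships millions of hashes):

```python
import hashlib

# Hypothetical stand-in for an NSRL known-good hash set.
KNOWN_GOOD = {"5d41402abc4b2a76b9719d911017c592"}

def triage(files: dict[str, bytes]) -> list[str]:
    """Return only the file names whose MD5 is NOT in the known set."""
    return [
        name for name, data in files.items()
        if hashlib.md5(data).hexdigest() not in KNOWN_GOOD
    ]

files = {"hello.txt": b"hello", "suspicious.bin": b"payload"}
print(triage(files))  # only suspicious.bin survives the filter
```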
Then it will show me all of the files where that email address was detected. When you're dealing with a real case, the same email address will very often pop up many times, especially where the suspect has been using their computer a lot; you'll find their email address quite often, so it can be a really good indicator. You will have a lot of false positives, but what we're really looking for are likely candidates for other email addresses: maybe addresses they've been contacting, or other addresses they're using. The top email address found is likely going to be the suspect's, or the primary person using the computer, or other email addresses related to that person. So this is a good way to start, although you will have to wade through quite a bit of data. I always recommend sorting the data. I go to the email addresses and then the search view; this is the regular expression we're searching for. Double-click on files with hits and sort by the number of hits you got. If it's just one or two hits, it's probably not interesting; things like 100 hits or more will usually be more interesting. It's a quick way to sort through these things. We also had our drugs keyword list that we set up, with two keywords underneath it, and we found five instances of one of them. It looks like this is probably related to the password.txt word list; we have another English (US) dictionary that it was in, and then a carved file that's probably also the password word list. So we can get a really quick view of the context of a keyword match. The other keyword had a lot more hits, probably because it's a substring match. Previously Unseen uses our correlation database to see if we've ever seen these applications before. All it's going to do is list the value that we're looking for.
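The pattern behind those hits is a permissive regex, which is exactly why junk like %s@hotmail.com matches too. A sketch of the search-and-rank step (the sample strings and the example.com address are made up for illustration):

```python
import re
from collections import Counter

# A permissive pattern like this is what produces hits such as
# "%s@hotmail.com": format strings inside libraries match too.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def top_addresses(texts: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count every match across the inputs and return the most frequent."""
    hits = Counter()
    for text in texts:
        hits.update(EMAIL_RE.findall(text))
    return hits.most_common(n)

sample = ["mailto: %s@hotmail.com", "from dreammaker82@example.com",
          "reply to dreammaker82@example.com"]
print(top_addresses(sample))
```

Sorting by hit count like this is what surfaces the suspect's own address (and the false positives) at the top of the list.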
In this case, it's the application name, like Microsoft Edge. It's previously unseen because this is the first Windows system that I'm processing; the first disk was just a thumb drive, and we didn't have these applications on it. So on the first Windows system we get a lot of previously unseen files, but as we do more cases, you'll get fewer and fewer previously unseen applications. What this eventually lets you do, once you've analyzed several computers, is that you will have already seen Microsoft Edge, for example, and only a few programs will show up that are specifically related to actions the user was taking. The actions that you know are suspicious you can tag as interesting files and then find them in future cases, whereas Previously Unseen helps you uncover them faster based on past cases that you've processed. So this is a really interesting feature, and the more you use Autopsy, the more interesting it gets. User Content Suspected: these are things that Autopsy thinks the user generated. Again, we have a bunch of JPEG images; all of these are what Autopsy believes are suspected user content, and most of the time they're going to be things like images or documents with metadata. Okay, Web Account Type. We have a Proton Mail account again, and then our local account. Remember, this local account was associated with Bettercap, a hacking tool. At the top we can see Score, Conclusion, Configuration, Justification. What this has to do with is a persona. In Autopsy, you can set up personas, which is basically linking information with a particular user identity. It could be an email account, a screen name, a real name, credit cards, things like that. You want to link all of this known identity information together into an individual persona, and that's essentially what's showing up here. Personas get a little bit complicated.
So we will talk about setting them up and using them a little later. And then finally, we have web categories. The interesting thing here is to sort by name; then you can see categories like search engine and web email, and a couple of other web categories that sometimes show up. A lot of this information is what you would get from prior modules, but if you want to see just the category really quickly and then zoom in on the cookie file that a particular domain was associated with, you can use it for that.

So now we've gone through all of the analysis results and all of our data artifacts. Again, these are the default modules built into Autopsy as of version 4.19. You can add additional modules depending on your needs; you can even write your own using Python. As we were going through our data artifacts and analysis results, we tagged a few more things, so let's go look at our tags. We bookmarked them, and we have file tags and result tags. Most of our tags were result tags coming from analysis results, so let's click on result tags. We have each result tag as the tagged item, we can see the timestamp here, and we also have the full path to the related data. If we click on any of these, we can see the details and the prefetch file it's associated with. So we've tagged some of our results, and now we can build our timeline based on these tagged results. We also had a file tag on ConsoleHost_history.txt. This is the PowerShell history file where we can see our nmap commands, right? So now we have enough tagged that we can build up our story about how nmap was used, for example. Let's say we want to write a report about how the user was using nmap; we already have our tagged data, if that's all we need to show. Next, we go to Generate Report, again choose an HTML report, and click Next.
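As a quick aside on that PowerShell history file: ConsoleHost_history.txt is plain text with one command per line, so pulling out the nmap invocations outside of Autopsy is trivial. This is just a sketch with a made-up function name, not anything Autopsy provides.

```python
from pathlib import Path

def nmap_commands(history_path):
    """Return the lines of a PowerShell history file that invoke nmap."""
    text = Path(history_path).read_text(errors="ignore")
    return [line.strip() for line in text.splitlines()
            if line.strip().lower().startswith("nmap")]
```

Each returned line is a full command with its arguments, which is the kind of detail (targets scanned, scan type) you would quote when telling the story of how nmap was used.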
And then I'm not going to use the suspect data image, just our Windows image, because that's what we're analyzing today. I want to include specific tagged results, and I'm going to select just the Bookmark tag, so the report will contain only the bookmarked results we were looking at today. I click Finish. Okay, now our report is done generating; click on that link, and the report should pop up. We have a couple of different things in here, just like the first report we saw: the entire multi-part disk image that I was processing, the versions of all of the modules we were using, and the job history. Job one was our suspect data.dd, which we didn't include here, and the Windows image was job two. In the case summary, we didn't have any keyword hits because we didn't run keyword searches there. We have our run programs with three items based on the prefetch files and their locations, we have tagged files, and we have our tagged results total, which has all four items but no timestamp information. We also have a web download that was tagged. You'll notice that the date and time was only included with one of these. In my report, as I'm writing it up, I would give the date and time of every artifact I found and then sort them into my story in chronological order; that's just the easiest way to understand it.

I previously generated a report based on all of the case data, and this is the kind of thing you can see: our data source usage listed, all of our EXIF metadata, installed programs, and interesting files. These are the kinds of reports you might want to export and then refer to for specific items whenever you're writing up your final report. So this is the exported report from Autopsy, and all of these items are things we will be referencing when we write up our final report.
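Building that chronological story from mixed artifacts is a simple sort, as long as you decide what to do with undated items. A minimal sketch, assuming each artifact is a dict with an optional epoch-second `timestamp` (the names here are illustrative, not an Autopsy structure):

```python
def chronological(artifacts):
    """Sort artifact dicts by their 'timestamp'; undated items go last."""
    return sorted(artifacts, key=lambda a: (a.get("timestamp") is None,
                                            a.get("timestamp") or 0))
```

Putting undated artifacts at the end, rather than dropping them, keeps items like the tagged results without timestamp information visible in the write-up even though they can't be placed on the timeline.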
So for example, under recent documents you can see the path: C drive, Users, local share, BetterCAP UI. I would copy this entire entry and say this was in recent documents at this particular time, and that the data originally came from the ui.lnk shortcut in the Recent folder, which is where we're getting that information. The reports we generate from Autopsy are not necessarily the final report we're going to hand off, unless that's what the lawyers are asking for. Most likely, what we hand off is a written report telling the whole story we've constructed, with this report serving as supporting evidence for everything we're claiming. That's the point of this report. That's it for today: going more in depth into Autopsy module results, what they mean, and how you can use them in investigations. Thank you very much.