Okay, continuing on working with binary files. Today we're going to be looking at a packet capture file, the kind you get from network sniffing with Wireshark, tcpdump, or Ettercap. First I'll list the directory: there's one file in here, and we can see it's about seven and a half megabytes. If I cat it out, you'll see there's definitely some binary data in there; Ctrl-C to stop that. If your terminal gets messed up, you can always type reset and that should fix it, because that sometimes happens when you cat out something like this.

Okay, so there's a lot of binary in there that we can't really read, but there are strings in there that we can read, and we can use a program called strings, which is most likely already installed on your system if you're running Linux. Running strings on the file basically shows anything that's a run of printable ASCII characters. Boom, lots of stuff in there: if you glance through you can see stuff that says Google, and you can even see some information from my network printer. Oh, that's nice. Okay, I'm going to Ctrl-L to clear the screen.

So we saw that; now let's start sorting through it. This is roughly the process I go through. The next thing I'd do is probably sort and uniq the output to remove any duplicates, and then start looking for things that might be URLs. Again, this is something where you either have to know what you're looking for, or you just poke around until you find it. I'm going to grep for any line that has "http" in it, with -i so it's case-insensitive. Okay, we've got some stuff: here are some partial URLs, and I know right away, because I've worked with these on my own website, which is probably where these links came from while I was sniffing, that these are YouTube thumbnail images for videos. So let's go ahead and try to find some of those.
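Here's that first triage pass as a runnable sketch. We don't have the original net.cap here, so this fakes a tiny binary file (sample.bin and urls.txt are my names, not from the video); on the real capture you'd run the same pipeline against net.cap.

```shell
# Fake a small "binary" file with some embedded text, standing in for
# the real capture (the URL here is invented).
printf 'GET /a HTTP/1.1\0\0\277\300binary junk\0http://example.com/img.jpg\0\0' > sample.bin

# strings prints runs of printable ASCII; sort -u removes duplicates;
# grep -i http keeps anything URL-ish, case-insensitively.
strings sample.bin | sort -u | grep -i 'http' > urls.txt
cat urls.txt
```

Note that "binary junk" gets filtered out by the grep while both HTTP-bearing lines survive.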
What I'm going to do now: I know they're all going to be named hqdefault.jpg, so I'm going to cat out the cap file and grep for that. Okay, we get a bunch of them, great. But they don't all start the same; some have a GET at the beginning. So let's trim it up a little.

Next we'll use sed. Again, there are different ways to do this with regular expressions, and you could use awk, but I'm just going to use sed and say: find every "GET " and replace it with a newline. If I do that, you can see that everywhere there was a GET there's now a line break, for the most part. Now I'll take that and grep again, and there's probably a more efficient way to do this, but we'll grep for the caret symbol, which means find lines that begin with something, and we want lines beginning with /vi. Forward slash is a special character, so we escape it: \/vi. Now we're only getting those lines, but we still have extra data on the end that we don't care about. The fields are separated by spaces, so we can separate them out by columns using awk: I hit the up arrow to get the last command, pipe it into awk, and tell awk to print just the first column of each line piped into it. This is all stuff I've covered in previous tutorials; we're just using it all together now to accomplish a task. There we go, that trailing bit is gone.

Now, there might be repeats in there, so we can pipe it into wc -l, l for line, and we see there are 135 lines. At this point let's sort -u it and count again: really, there were no repeats, but if I'd gone to the same site multiple times there could have been, so the sort and uniq weren't strictly necessary here.
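The whole sed, grep, awk chain above, as one self-contained sketch. Since we don't have the capture, requests.txt below simulates a few lines of strings output; the file name, paths, and video IDs are all invented for illustration.

```shell
# Simulated chunk of "strings net.cap" output; the real input would be
# piped straight in.
cat > requests.txt <<'EOF'
xyzGET /vi/abc123/hqdefault.jpg HTTP/1.1
Host: i3.ytimg.com
GET /vi/zzz999/hqdefault.jpg HTTP/1.1
GET /vi/abc123/hqdefault.jpg HTTP/1.1
EOF

# 1. sed: turn every "GET " into a newline so each path starts a line.
# 2. grep: keep only lines beginning with /vi (caret anchors the
#    start; the slash is escaped).
# 3. awk: print the first column, dropping the trailing "HTTP/1.1".
# 4. sort -u: remove duplicate paths.
sed 's/GET /\n/g' requests.txt | grep '^\/vi' | awk '{print $1}' | sort -u > paths.txt
cat paths.txt
```

The \n in the sed replacement is a GNU sed feature; on other seds you'd need a literal escaped newline.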
Now we're going to do a while read loop: we'll read each line into a variable called jpg and run wget -c on it. Oh, I kind of skipped a step here. Something I already know is that these images are served from http://i3.ytimg.com, a YouTube image server, so the full URL is http://i3.ytimg.com followed by the path in our variable. If you didn't know that, there are ways to poke around the file a little more and find it, which I'll get to.

Now, they're all called hqdefault.jpg, so if I just save each one under that name they'd collide. I think wget numbers duplicates, but let's just give each one a random name, since the names don't mean anything in this particular case; I'll use the RANDOM variable plus .jpg. I type done, hit Enter, and... no such file or directory. Oh, let's make a directory to put them in: mkdir jpeg. Run it again, and there we go, we're downloading all those images.

Of course, if the URLs you see in the sniffed packets are private links, you won't be able to get them this way. Momentarily I'll show you how to pull the images directly out of the sniffed packets; right now we're just finding the URLs and then pulling the files from the live site. They're two different ways of doing things: pulling images straight out of the capture has some advantages, and doing it this way has others. Again, if it's a private site and you don't have a username and password, you may not be able to access it this way.

Let me open up my file browser to the jpeg folder, and there you go: you can see they were all YouTube thumbnails from my website. Again, I knew what the URLs started with, so I had a little foreknowledge on that. If you didn't, there are other things you could do.
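The download loop just described, sketched as a dry run: it echoes the wget commands instead of running them, so it's safe without network access; delete the echo to actually download. paths.txt stands in for the output of the earlier pipeline (its contents are invented), and i3.ytimg.com is the host named in the video; whether that exact server still serves thumbnails today is another question.

```shell
#!/usr/bin/env bash
# Stand-in for the list of thumbnail paths extracted earlier.
printf '/vi/abc123/hqdefault.jpg\n/vi/zzz999/hqdefault.jpg\n' > paths.txt

mkdir -p jpeg    # wget fails with "no such file or directory" otherwise

# Every file on the server is called hqdefault.jpg, so save each one
# under a random name instead. $RANDOM is a bash builtin (0-32767),
# so collisions are possible on very large sets.
while read -r path; do
    echo wget -c "http://i3.ytimg.com${path}" -O "jpeg/${RANDOM}.jpg"
done < paths.txt
```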
For example, go back to just grepping for hqdefault (well, if I could type today) and add -A 5 and -B 5: find all the lines matching hqdefault and print not only the matching line but five lines after and five lines before it. There we go. Right away we can see the Host header, which is where you'd find out where the image is hosted if you didn't already know; that's the hostname we used in the one-liner earlier (it was a big one-liner). You can also see that it was referenced by my site, filmsbykris.com, so that's a little more information there. And you can see the User-Agent of whoever was browsing: I was using Linux, because it was me, and it says Mozilla... and it looks like Chrome, Safari, I'm not really sure, but we do know I was on Linux and using either Chrome or Firefox (or really Iceweasel, it's all the same thing).

Okay, so that's one way to pick through the file, and of course there's a lot of other stuff you can look at in there. We were looking at hqdefault, but we could also just grep for jpg and look at other JPEGs. Here's one hosted on tinypic.com, another website I went to while capturing, and you can see the page that referenced it. Right now I could just right-click and go to that site, but if we wanted to script it out, we could do the same kind of thing I just did with the YouTube pictures.

Let's go another route now. Another way to pull information out of a cap file... I think I mentioned in a previous video using PhotoRec to recover files from a hard drive that have been deleted. I know of a way to get images and other files out using Wireshark, which I'll show you at the end of this tutorial, but I've been trying to find a way to do it from a script, so this is my thought process. I use PhotoRec, which is part of the TestDisk package and should be in your repositories. I run photorec net.cap and hit Enter.
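Before moving on, here's the header-context trick from a moment ago as a self-contained sketch. headers.txt fakes the kind of cleartext HTTP headers that sit around each hqdefault hit in a capture; the header values are invented for illustration, and on the real file you'd pipe strings output into the grep instead.

```shell
# Invented sample of the headers you'd see near a thumbnail request.
cat > headers.txt <<'EOF'
GET /vi/abc123/hqdefault.jpg HTTP/1.1
Host: i3.ytimg.com
Referer: http://filmsbykris.com/
User-Agent: Mozilla/5.0 (X11; Linux x86_64)
EOF

# -B 5 / -A 5 print five lines of context before and after each match,
# which is where Host, Referer, and User-Agent show up.
grep -B 5 -A 5 'hqdefault' headers.txt
```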
I just proceed through this with all the defaults. It's a very small file, so it didn't take very long, and now if I list the directory you can see there's a recup_dir folder. If I open it with Thunar (Thunar is my file manager), you can see we got some text files, which aren't going to show us anything; the strings command will probably show us more than that. So that wasn't very helpful. That was the first thing I tried.

The next thing is a program called foremost, which should also be in your repositories: foremost net.cap, hit Enter, and again it's not a very big file, so it didn't take long. It created an output folder, so if I open that in Thunar you can see it pulled out GIFs, HTML files, and JPEG files, put them in separate folders, and created an audit file. Going into the gif folder, you can see we have some GIFs here; I'd open them up in a web browser. So we did get those. There are some files that didn't pull out right, they're corrupt; sometimes that's just because things get lost when you're sniffing, maybe they didn't fully download or maybe the sniffer didn't capture them, but we got a decent little output here. HTML and text, once again, we can get that stuff with strings.

But let's look at the JPEGs, because we know there are a bunch in there from YouTube, and I also went to tinypic.com. Go in there and... okay, so it kind of pulled them, but most of them are corrupt. My assumption about why this happens is that they're fragmented: they aren't in one contiguous place in that file. The carver finds the header for an image and starts to copy it, but the rest of the file is somewhere else in the cap file, because the network was downloading all these different images at once, so they get written into the capture in different places. So I have yet to find a really good way to pull images and other files out of a capture from the shell, in a script, unfortunately. If you know of a way, please let me know.
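For reference, here are the two carving attempts above as plain commands. Both tools create their own output directories (recup_dir.N for PhotoRec, output/ for foremost). I've guarded them with existence checks so the snippet is safe to paste even where the tools or the capture file are missing; note that photorec is interactive, so expect to walk its menus.

```shell
# foremost: non-interactive carver; -i input file, -o output directory
# (plain "foremost net.cap" uses the output/ default, as in the video).
if command -v foremost >/dev/null && [ -f net.cap ]; then
    foremost -i net.cap -o output
fi

# photorec (from the testdisk package): interactive menus; /d sets the
# recovery directory prefix.
if command -v photorec >/dev/null && [ -f net.cap ]; then
    photorec /d recup net.cap
fi
```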
But now I'm going to show you a way that definitely works, though it's not scripted out: Wireshark. You could use Wireshark to capture the packets in the first place (I usually capture with Ettercap), but it also has a lot of great tools for looking through, manipulating, and pulling stuff out of captured packets. So I open up the net.cap file, or whatever your capture file is called, .pcap or whatever, and I go up to File, Export Objects, HTTP. It lists all the files it found, and right away you can see all the hqdefault files from the YouTube server.

So, just to go over that again: File, Export Objects, HTTP, let it find everything, then click Save All. Here I'll just type a folder name, shark. Don't create a new folder yourself: when you click OK, it's going to create the folder for you. It's kind of confusing, because if you create the folder called shark first and then go into it, the OK button gets grayed out; just type the name of the folder you want and click OK. It's always going to say "some files could not be saved"; I've never once had it not say that. Again, I assume that's because when you stop the capture in the middle of things being downloaded, the files at the end get corrupted. Don't worry about it.

I'm going to close this out, and now I can use my file manager, whatever file manager you use, to open up the shark folder, and there we go: we've got the images. These are all images that were captured off the network, so you can see all the YouTube thumbnails, and the images from tinypic, which I'd just gone to and browsed the recent uploads, just to capture some traffic; glad there's nothing too inappropriate in here. There are also HTML files and basically any file it saw, JavaScript files and so on.
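One possible answer to the scripting wish, added here as an editor's note: tshark, Wireshark's command-line counterpart, gained an --export-objects option (in the 2.x series, if I remember right) that does the same thing as the GUI's Export Objects dialog, and since it reassembles TCP streams it sidesteps the fragmentation problem that trips up the carvers. A guarded sketch, assuming the net.cap file from the video:

```shell
# tshark reads the capture (-r), stays quiet (-Q), and writes every
# reassembled HTTP object into the shark/ directory.
if command -v tshark >/dev/null && [ -f net.cap ]; then
    mkdir -p shark
    tshark -r net.cap -Q --export-objects http,shark
    ls shark/
fi
```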
So that's the best way I know of to pull files out of a packet capture. If you know a way to do it through the shell, something that works and that you can script out, that's great; but if not, Wireshark is great at doing it.

That's it for this tutorial. I hope you enjoyed it, and I hope you enjoy all my tutorials. Please visit my website, filmsbykris.com, that's Kris with a K; there should be a link in the description. Also, if you enjoy my videos, consider becoming a supporter at patreon.com/metalx1000, where you can get rewards for being a supporter and also get a lot more input on what type of videos I make. Oh, also, I'll put the notes for all of this in a link in the description. I hope you did enjoy this, and I do hope you have a great day.