Hey, good afternoon, everyone, and welcome to the Packet Hacking Village. We're now in the afternoon part of the session, and I'm going to make this introduction really quick. I think I've been doing this for way too long, and it seems like every year my introduction to the two of you gets shorter and shorter, so here we go. It's my absolute pleasure to introduce Mike Raggo and Chet Hosmer. Thank you. Make sure you guys can hear me okay. Sound okay? Alright, cool. So yeah, we'll be presenting Stego-Augmented Malware, which is a combination of a lot of research Chet and I have done over the years, applied in a slightly different manner. And to really preface this, Chet will talk about some of the research he's involved with at a college, which also sparked some of this additional research and really the presentation altogether. So with that, we'll go ahead and get started. In terms of the agenda, we're going to cover a variety of different types of steganography-augmented malware. Try to say that fast five times. Stats, trends, commonalities, differences. Some of the research through the college that Chet's involved with had a lot of the students focus on specific variants of malware, and then we started to look more broadly across these different variants to find commonalities and differences, to understand IOCs and TTPs and things like that. And as a result, we wanted to better understand how we could detect a lot of these, especially based on their behaviors. And then we'll talk a little bit about our ideas about the future and where we think this may go next. So let Chet introduce himself. Hey, everybody. I've been here many years, so you probably know who I am, but the focus of this talk really goes to something that I've been working on for a little over 20 years now, and that's where all the gray hair maybe has come from.
So we're going to talk about steganography and how it's being applied to malware today, and specifically some work that we're doing at Utica College in one of the programs that I operate there, and talk a little bit about some of the student research as well. I also teach at the University of Arizona as well as Utica College and at Champlain. If you're interested in any of those programs, please stop by and talk to me. I'll be presenting tomorrow at six o'clock as well, so stop by then if you want. I'll turn it back to Mike. That was quick. Thanks. Yeah, like Chet, I've presented here many times, and Chet and I have collaborated on a lot of steganographic and steganalysis research over the last 20 years or so at this point, and have done a few books together too around covert communications, data hiding techniques, and things of that nature. And so I'd like to thank Ming for having us back again this year. So I'll let Chet first talk a little bit about all the things they've been doing at Utica College, and that'll really preface what we're going to go through in terms of the research and analysis. Thanks, Mike. We teach a class at Utica called Cyber 642, which is data hiding and access control. The point of this particular course is to take a really in-depth look at what's happening and what is emerging from a data hiding and covert communication point of view. The focus of the course is to look at the latest malicious code, which includes the advanced persistent threats that are out there. Mike and I did a talk several years ago about the APT Operation Shady RAT, which kind of started this whole gamut of technologies that are used to augment malware in order to make it less discoverable. So the whole point of this is to incorporate steganography into malware in order to conceal its existence, right, so they can communicate covertly from that perspective.
So again, in recent years there's been a lot of movement, and Mike kind of walked through a bunch of the examples that have come out of the research that we're doing at Utica and work that we're doing together. And as Mike said, we've been studying this problem for a number of years, and we've looked at it from several different vantage points. So the students in this class, this is a master's class, discovered and examined a wide range of steganography-enhanced malware threats. We've looked at over 30 of these in the last year or so. And we do this in order to analyze vulnerabilities of the malware along with the steganography methods that are utilized, because some of these actually use fairly sophisticated stego, while others use very unsophisticated stego. So our interest is to understand where the techniques they're using to enhance the malware are potentially vulnerable, so we can actually use those in order to either detect or disrupt that activity. We want to develop strategies that allow us to do detection, response, and mitigation of the threats that are there. In several cases, students have chosen to further examine these threats as part of their final capstone or their thesis project at Utica. Michael Beatty is just completing his right now, and it's an outstanding paper that will be in public view probably in about three months when he finishes his thesis. That thesis covers a wide range of these threats and the vulnerabilities that we have studied and figured out. I'm the second reader on Michael's paper, and I teach the course that I'm talking about, so we spend a lot of time with a lot of different students looking at a lot of different threats. So the catalyst for this presentation comes from a couple of different points of view. One is Mike's and my 20 years of studying this problem and watching the evolution of steganography being used in multiple different ways.
And it's interesting that one of the things we tend to focus on is encryption, but less emphasis, even today, has been placed on the study of steganography, how it actually impacts detection, and how it can evade detection. And now that it's being integrated with malware, it's at the next level. Some of this has been inspired by the research that we've done, and also the research that the students have been doing over the last couple of years, in order to take us to that next level. So I'm going to turn it back over to Mike, who's going to walk through some of those changes, and I'll bounce back in a little bit to talk about what we're doing about it. In other words, how we can actually use what we've figured out in order to do that. So I'll turn it back to Mike. Thanks, Chet. So taking a step back first and looking at ways in which we could categorize these, and also determine their level of existence, when they first came into play or were released or seen in the wild, we started to build out a simple timeline here where you can see a spike in increase, especially over the last four to five years. Certainly this is a very small data set, but over time I'd like to continue to see how this evolves and emerges. Clearly there's a growing number of these, and it's on a level that's potentially exponential. Of the ones that we did some deeper analysis on, beyond what the students had done, we started to put these into some different categories along the lines of banking trojans or cryptojacking. This categorization was really based on their motives, right? What are they looking to do? In addition, a lot of these, as you might expect, are related to remote control, C&C, remote access trojans, and things related to data theft or espionage. And then lastly there were some actual malvertising and ad-related click-throughs and promos to generate clicks and drive traffic to specific sites or specific ads.
So one of the ones that was particularly interesting was one that leveraged social media. At a previous Packet Hacking Village / Wall of Sheep talk a few years back, I had presented around a lot of these risks and threats across social media, because I was doing a lot of research at that time across Twitter, Facebook, and a variety of other social networks, as well as streaming media services and things like that. This one in particular was found on Twitter, and to give credit, it was identified by Trend Micro. The premise behind it was that, outside of the scope of this talk, one or more systems had been compromised and infected with this malware. But where the steganographic component starts to come into play is that the malware is out there on a regular basis checking a particular Twitter account, an account that had been out there for a few years, and whoever owns this account then started to post memes, and within those memes were certain types of commands. So once the meme was posted, the malware would identify that it was out there and parse it, and within it would be a variety of commands that the malware would leverage to run a variety of things, including screen scraping or screen captures and a number of other things, and then additionally post this stuff to Pastebin. And through Pastebin it would actually initially obtain a URL of where to post it, and it would post it back to Pastebin or a separate IP or URL altogether related to the C&C. Just a high-level diagram here, right: you've got the computer reaching out to the social media site, it's already infected with malware, and it's getting updates and commands via Twitter, right. Not Twitter directly, but a meme posted to Twitter. And we'll talk more later about a presentation Dr. Phil Tully and I did two years ago at DEF CON 25 around neural nets and leveraging the ability to not only detect some of these things but furthermore do predictive analysis.
So in this case it went out to Twitter, parsed this particular meme and other ones that were posted beyond that, to gather commands. And from there it would get information such as a URL to obtain command-and-control instructions. I'm glad to have this on the screen, because it was not only Pastebin but Imgur and things like that. And then as a result, the files would be sent to the remote command-and-control URL that was provided via Pastebin or Imgur or something else. Along with this were a variety of commands that were run and leveraged by the malware itself to capture screenshots of the desktop, list processes that may be running, steal stuff from the clipboard, even potentially the username for the machine, and documents and things like that. But it kind of begs a question here for a moment, and that is: if you're running an enterprise network, how frequently do you have, or want to permit, enterprise users using things like Pastebin? We find in most cases, when doing pen testing and network analysis and looking at it from an ingress or egress standpoint, that this is largely still allowed outbound for posting things. I'll jump into some more of the variants here, but as sort of a stepping stone for that, I'll let Chet talk about his Raspberry Pi project, which, as he mentioned, he's going to present tomorrow as well. Thanks, Mike. So one of the questions is how do we detect this? And do we care if we detect this? The issue is, the whole point of augmenting malware in this fashion is so that it becomes not observable, right? Or the observables are very low. So how do we actually go about detecting this? Well, there are two different methods that we've analyzed and looked at. The first, of course, is we could analyze the images that are being posted or recovered from different sites and basically determine if they have content embedded in them, right? And as I mentioned earlier, some of them are fairly simple in embedding.
They may be using a JPEG with data simply appended to it. They may be using a JPEG that actually inserts things in the header of the JPEG to communicate the data, whether it be a PowerShell script or something like that, embedded in those areas that we're not looking at or that don't get displayed. We can detect those relatively quickly and relatively easily. But if they go ahead and modify things like the quantized DCT coefficients and actually embed that information in those areas of the image, then the image is going to be compromised and now we have to do more exhaustive analysis of it. And that causes two problems. One is we can miss things, and two, we can issue false positives. Neither false negatives nor false positives are a good thing. The second problem is that the deeper analysis of those images takes a long time depending upon the size of the image. It could take several seconds or even longer, depending on the size of the image we're trying to analyze, to determine whether it has been compromised or information has been embedded in it. So we got thinking about this and said the second way to approach this is to look at the behavior. We never want to build signature detections anymore, because they just don't work, right? As soon as the bad guys know that we're doing signature detection, they just change the signature, right? And that's difficult, and that's why most of these things get through in the first place. So I'm not going to talk about the compromise of the systems itself. I want to talk about analyzing the behavior of the malware once it's embedded. Now, remember, these are probably in most cases fileless malware. So we're talking about memory-resident things that have compromised the system in such a way that these processes are running with privilege, and therefore they're able to move within that environment without being detected by, you know, traditional defenses of those environments.
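To make the "appended data" case concrete, here is a minimal sketch (not the speakers' actual tooling) of flagging bytes that trail a JPEG's end-of-image marker. Viewers stop rendering at the marker, so anything after it is invisible to the eye but trivial to detect; a real scanner would also need to handle EXIF thumbnails, which embed their own EOI markers.

```python
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker; viewers stop rendering here

def trailing_bytes(jpeg_bytes: bytes) -> bytes:
    """Return any data appended after the last EOI marker.

    Non-empty output is a strong hint of simple append-style stego,
    since a well-formed JPEG ends exactly at the EOI marker.
    """
    idx = jpeg_bytes.rfind(JPEG_EOI)
    if idx == -1:
        raise ValueError("no EOI marker found; not a complete JPEG")
    return jpeg_bytes[idx + len(JPEG_EOI):]

# Toy demo: a fake minimal "JPEG" with a payload appended after the marker.
fake = b"\xff\xd8" + b"...scan data..." + JPEG_EOI + b"hidden payload"
print(trailing_bytes(fake))  # b'hidden payload'
```

The same idea applies to PNG (data after the `IEND` chunk); this is the "relatively quick and easy" class of check Chet describes, as opposed to statistical analysis of DCT coefficients.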
But we have been doing a lot of work in analyzing the behaviors of those operations, and they tend to be quite similar. I kind of walked through a scenario for you already of how this actually works, and most of them work the same way. They have a need, right, to go back out to the internet, to a public site that doesn't look suspicious. Typically, the way we have detected these in the past is we know what C&C sites are out there that they're using, and if any connection goes to them, we basically block it, right? So it's a signature detection: we've got a signature that this is a bad C&C site. But in this particular case, it's not a C&C site anymore, right? Now it's basically Pastebin or Twitter or Facebook, or anything that has an image that's going to be downloaded, and information is going to be gleaned from that as to what to do. And then information is going to be posted back, again in the form of a JPEG or something else that has the content they want to exfiltrate from the environment in it. Again, something that's typically going to be ignored. However, when we start looking at privileged applications, right, or privileged processes that are posting information to the internet, or going to the internet and getting information, that becomes unusual. So that's just one simple example. But the point is the defense-in-depth approach here is to identify these behaviors, and that's what we've been studying for the last couple of years: to understand what those behaviors are. And so one of the things that I'm going to present on tomorrow evening is this Raspberry Pi project that I've been working on as well. It's basically a passive sensor. The sensor is looking for aberrant behavior within your environment that's outside the norm. And these would be things that would be outside the norm, right?
Normally systems are not going to be posting these kinds of images to these particular sites, or retrieving them and then processing them. So we're basically developing ML models that will allow us to identify these even if we've never seen them before, right? Because they have these characteristics that we're analyzing. So as you know, the first step in doing any kind of ML work is to actually define: what am I trying to detect, right? And number two, what are those specific features or characteristics that I want to identify as either good or bad? And then be able to build a corpus of those in order to identify those behaviors in a more intelligent-system kind of approach, versus a signature-based approach. And that's what we're doing with the Raspberry Pi Python sensor: using that sensor to detect this abnormal inbound or outbound behavior that's being caused by the stego-augmented malware that's in play. Does that make sense to everybody? Any questions about that? Any thoughts about that? I'll stop here for just a second, because I'm going to be shooting out to go do another presentation, but I want to make sure that I answer any questions related to either how the information is being extracted from somewhere out in cyberspace, or being pushed there, and how this is actually working, because I know we've covered a lot. Pretty simple, straightforward. Yes, sir. [Audience question: would we be able to detect this activity by flagging it whenever it's not a browser that's reaching out?] Sure, so that's a great example. In other words, does this particular process normally go and retrieve images or post images to the internet? Is it a common thing? Now a lot of processes do that, not just browsers, right? And so where they're going is important, but we don't want to identify those from some signature point of view.
We want to identify the behavior: when do they do them, how often do they do them, and is it something normal for this process to do. So one of the things about the malware that we've looked at is that they're trying to attach themselves to processes that normally do that, right, but they don't do it in the same way. And one of the mistakes that they've made is they use the same images in order to do it. So that's one of the things that we learned, and it's such a great question: if we see a process, we may not be able to instantly identify it as a problem, but if I take an image that was being posted by that process and I hash it, and then it posts that same image in the future and the hash has changed, what does that tell us? Something different was embedded in that image. It's the same thing on the pull-down. This is how we actually detected Operation Shady RAT: we were seeing images that were posted and retrieved that were the same image, but they had a different hash, right? So we're able to identify it in those ways. Obviously, as they become more sophisticated and use different images every time to convey the information, it becomes more difficult. But they typically aren't that sophisticated, because they're trying to make this happen in a very quick fashion. Any other questions? Yes, sir. That's exactly what we're doing with the Raspberry Pi project, so I invite you to stop by tomorrow at 6 when I'm doing that talk, but basically we're modeling that baseline and baselining the behavior of that environment based on what's happening in that environment normally. What connections are being made, by what devices, over what protocols, at what times of day, with what size packets, that kind of thing. We're monitoring that and basically creating a semi-supervised learning model of that environment under what we would consider normal conditions.
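The hash trick described here, the "same" image reposted with a different digest, can be sketched in a few lines. The cache keying and names below are hypothetical, just to illustrate the idea:

```python
import hashlib

# Hypothetical cache mapping (account, filename) -> last-seen SHA-256 digest.
seen = {}

def check_repost(account: str, filename: str, image_bytes: bytes) -> bool:
    """Return True if this 'same' image now has a different hash,
    a hint that new content may have been embedded before reposting."""
    key = (account, filename)
    digest = hashlib.sha256(image_bytes).hexdigest()
    changed = key in seen and seen[key] != digest
    seen[key] = digest  # remember the latest version for next time
    return changed
```

A first post returns False; an unchanged repost returns False; a repost whose bytes differ, as happened with the Shady RAT images, returns True. As the talk notes, this only works while attackers reuse carrier images.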
And once we do that, we can use it in order to detect aberrant behavior that falls outside what the normal behavior of that environment is. Now, one of the reasons that we use a Raspberry Pi to do this is, first of all, it costs 50 bucks, right, even the new one, and we can place it in different parts of the network, so we can actually distribute this across a much larger network instead of trying to do it from a single point. We may want to monitor things within a certain subnet or a certain area that we're most concerned about. I don't know if that directly answers your question, but you kind of get where we're going with that. Anything else? Are you concerned about this? Is this something that's ever come up in your discussions? This is great, because when we've done this in the past we haven't had a lot of people shake their heads yes. So it's great to hear that people are aware, and that's part of our job here: to make you aware that this is going on and start to get you to think about ways that we can actually address it. Yes, sir. Yeah, we would place the sensors in multiple locations, and like I said, I invite you to stop by tomorrow because I go into great detail on how, but yeah, that's how we do it. We basically place the sensors in multiple locations and we monitor the network over a longer period of time. So this is not a typical vulnerability assessment, pen test, or Nmap scan; this is something where we look at the behavior of the environment over time, and the critical thing on the Raspberry Pi is how we store all that data. So, sure. Yep. Yeah, we're only looking at flow, okay, in this particular case, but it's how we categorize the flow in order to turn it into something that will allow us to detect from it. But send me a message and I'll send you the video of how that works if you can't be there tomorrow. Okay? Okay.
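To give a flavor of this kind of flow baselining (a toy sketch under assumed inputs, not the actual sensor code), you can learn typical flow sizes per device/destination/port during a training window, then flag anything unseen or far outside the learned distribution:

```python
from collections import defaultdict
from statistics import mean, stdev

class FlowBaseline:
    """Toy semi-supervised baseline: learn per-(device, dest, port)
    flow sizes under normal conditions, then flag aberrant flows."""

    def __init__(self):
        self.history = defaultdict(list)

    def learn(self, device, dest, port, nbytes):
        """Record one observed flow size during the training window."""
        self.history[(device, dest, port)].append(nbytes)

    def is_aberrant(self, device, dest, port, nbytes, k=3.0):
        """Flag flows to unseen endpoints, or sizes far outside baseline."""
        sizes = self.history.get((device, dest, port))
        if not sizes:
            return True   # this device has never talked to this endpoint
        if len(sizes) < 2:
            return False  # not enough samples to judge size yet
        mu, sigma = mean(sizes), stdev(sizes)
        return abs(nbytes - mu) > k * max(sigma, 1.0)
```

For example, a sensor that learns a Honeywell actuator posting roughly 2 KB per hour would flag both a sudden multi-gigabyte transfer to the same endpoint and any first-time connection to a paste site, without a signature for either.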
Alright, I'm going to turn this back over to Mike, and I'm going to scoot, because we've got a talk at SkyTalks at one o'clock that I've got to go get set up for. So you're good? Sounds good, I guess. Alright, guys, thank you, we'll see you later. Thanks, Chet. So, Chet touched on a couple of really important points in how we look at this, right, and as he mentioned, when he covers his presentation tomorrow around the Raspberry Pi, he'll go into this in more detail. Around that same time I've got a presentation in the IoT Village that touches on modeling out IoT behaviors and identifying malicious activity as well. And a lot of this really ties back to connecting a lot of dots. Looking at, for example, the previous research we were doing around social media: as you identify some of these malicious accounts, which, as we showed in the first example, were posting memes that were being parsed for different types of commands to further enable the malware, is that potentially an insider threat, right? If you're impacted by that, does an internal user maybe own that particular Twitter account? This was some research we were doing, and upon doing so, we actually found an instance where that was the case. As we were flagging these malicious Twitter accounts, we found someone internally was logging into one of those accounts, right? So it was actually an insider threat. And it kind of begs the question that, with that additional context or that additional intelligence, leveraged whether it be at your firewall, your IDS, or something else, it's kind of a powerful thing. It's kind of a 1% issue but arguably a 99% problem, right, if you've got a breached server, device, or something else on the internal network. The other thing, too, in looking at the baselining, and Chet will certainly get into this in a lot more detail in his talk, is such that I see all the normal behaviors of how things are normally communicating over the network or to one
another. And if I can find a way to baseline on that, I can find abnormalities, abnormal behaviors, things like that. For example, say an IoT device has been infected with Mirai and I start to see different behaviors on the network: it's normally just a Johnson Controls sensor or a Honeywell actuator that sends a signal once an hour, or a very small group of packets every once in a while for a status update on a water level, humidity, or other types of things, and it's now emanating 5 gigs of data. There's a big anomaly there, right? So having the ability to baseline on that can be a powerful thing. Okay, so bringing that back to this: looking at a few other forms of stego-augmented malware, we can start to find patterns, and as we go through these next few examples, we're going to tie that back to what's actually occurring within the images. How is the embedding occurring, and furthermore, how can a network administrator or someone in the security operations center find ways in which they can potentially detect these types of risks, beyond how we think about these problems today? Sundown was a particular piece of malware, sort of a kit, if you will, with a lot of different variants. In this particular example it was leveraged as part of a website where there was a hidden iframe. The iframe itself was completely white, so it looked like it was part of the background, but within it was a PNG file, and embedded in that some additional information was initially planted there to allow vulnerabilities to be exploited in IE,
both related to JavaScript and Flash. There are actually three specific CVEs tied to this. But if the browser was vulnerable and the user went to this particular site, this PNG would actually be parsed and decoded to reveal a malicious URL, a pointer to another site to further infect via Flash. The interesting thing about this is that it would then pull down malware and infect the device with a Trojan variant of Zeus, and then would engage the C&C server for further theft. This just goes through the flow really quick, but this is going to be important later on as we think about the behaviors and the flows of this activity. As I mentioned, the user goes to a seemingly benign site with an iframe in it, which actually contains a PNG that's parsed to reveal a malicious URL that's going to redirect the user to a different location and, upon doing that, pull down some malware; in this particular case it was more bank-related, for stealing bank information. There was some great information out there on the malware-traffic-analysis.net site. I don't know if you've seen it, a great site; they'll have a lot of different bundles and zips of all the files related to the exploit, as well as pcaps and things that you can look at. Another variant related to malvertising also involved a malicious web page, but this one exploited Mac fonts. So if you had a Mac and you were going to the site, it would exploit that, and when we analyzed the images, the way the data, the commands, all of that was embedded was using LSB, or least significant bit. It would parse and extract that to create an actual string which would then be executed and prompt an Adobe Flash download, and furthermore it was actually going to infect the device with Shlayer. So, some observations: you can see there are a lot of images out there as part of this stego-augmented malware where the images themselves are being parsed for information. The interesting thing about this is, first and foremost, it doesn't break the image
format, right? So when I dig a little bit deeper in the next few slides into the how, the images still render totally fine, so it looks completely benign, and it does this without hindering the ability to render the image. And there are no viewable distinctions to the user either; if it's an LSB technique or something else, you visually can't see it. So one of the most simple things that we've seen, which bypasses much of the defense in depth, is the ability to just simply append that data to the end of the image. Whether it's a JPEG, a PNG, or something else, a lot of these file formats have an end-of-file marker. What's interesting about it is that if you add data beyond the end-of-file marker, a browser, your viewer on your desktop, Preview, other types of things, just simply ignore it; but all of this data is living beyond the end-of-file marker. Now, if I was to post this to Twitter or Facebook, they're going to strip metadata, right? They're going to recompress the file and damage this data, and they're also going to strip off data beyond the end-of-file marker, in all of our testing; and that was part of the testing that supported the neural network research that we had released two years ago at DEF CON 25. And in the context of least significant bit, right, as I break down how the image is actually rendered, I've got blue, green, and red, and I modify the least significant bit of any of those, or all of them, right, to embed information without visually really changing the file in general. I'm just modifying the least significant bit and changing the color by one single bit; you're not going to visually see that difference in the actual image the majority of the time. But there are techniques where you could use the next-to-least significant bit, and we've even seen other variants of that. Certainly if I've changed the fourth or fifth bit, that's where you start to encroach upon potentially changing the viewability of the
image, where somebody says, hey, what's with the image, it's blurry, or I see random pixels or dots and things like that. So again, I'm just changing the least significant bit. This is all part of what, when it's extracted and rebuilt, allows you to actually build whatever it renders, whether it's ASCII code, a command, or other types of things. And so in looking at how I could somewhat weaponize this: what would be good sites or social networks to upload this to, where it wouldn't be recompressed, the metadata wouldn't be stripped, and data after the end-of-file marker would not be removed? As I mentioned, Twitter and Facebook both have their own compression techniques, so when you upload an image or even a video, it may be completely recompressed. They'll strip the majority of the metadata, although there was an exploit a few months ago related to that, the IPTC one, and they'll strip off everything beyond the end-of-file marker, thus really rendering a new format of the actual image. But that's not the case with things such as Tumblr or Pinterest. At a talk Chet and I did two years ago, we also exploited streaming media services. I set up a musician's account on Pandora and actually took an MP3, modified it, and reposted it, and later on it played the same song again on the streaming media service, which I was actually surprised about. I just wanted to see that I could modify an MP3 by embedding in the embedded JPEG that you see on your MP3 player, the album cover or the song art, and sure enough it showed up and started playing, and you couldn't distinguish any difference in the music, because the only thing I had modified was the JPEG that's part of, or within, the MP3 itself. So you could essentially communicate that over a streaming media service, which I demonstrated. Furthermore, if you go into Google Chrome's developer options, you can download that song, right? And if you were communicating this
to anywhere in the world, the actual recipient would know to download it, pull out the JPEG, and extract whatever data had been hidden within the JPEG hidden within the MP3 that was streaming through the media service itself. So when we did this presentation at DEF CON, Dr. Phil Tully and myself, Phil is the data scientist, he's the one that's got his doctorate degree in neural nets; I did all the background research, analyzing how these images were recompressed and what was stripped and not stripped by each of them. The premise behind this, though, which we thought was really cool, was: if I can better understand how these images are being weaponized and how this information can survive being uploaded, could I, as a white hat, let's say, actually model that out and leverage machine learning to predict a thousand other variants of the same thing, and use that as a method by which I build a massive detection capability? And that's what we proved out and demonstrated at DEF CON 25. Oh, and I forgot, I put a screenshot in here; this is up on YouTube if you're actually interested in it. [Audience question about what the green in the chart means.] Yeah, that's a great question, I should actually put that on the slide. It means that it survives. So green means it survives. So if I embedded something in an image and I post it, either as a profile picture or just as part of a post or part of an album, how these different social networks handle those images, as you can see here, in some cases they'll strip the metadata or recompress it, but in other instances, if it's a post versus a profile, maybe they don't. So bottom line, the straight answer to your question is: anything that's green is survivable and is not modified whatsoever. Yeah, great question, I'm glad you brought that up, I'll have to update the slides. [Audience question about whether Twitter changed their policies at all.] Trend Micro was actually really good about not pointing the finger at Twitter themselves, right,
So, as a result, the only thing that I know is that once Trend Micro discovered this, they notified Twitter, and shortly after, Twitter took the account down; that was essentially it. On a similar note, although completely unrelated, was the scenario I mentioned where we were finding other malicious accounts, and one of those mapped back to the internal network because somebody had logged into it from inside, which was kind of a surprise to us. We had accidentally stumbled across an insider threat.

Okay, last two slides, and then I have to run over to our next talk at Skytalks. Some observations here, some thought-provoking things to think about. First, in terms of parsing images: as we highlighted, it's quite easy to append data to the end of an image, especially a JPEG or a PNG, because it's still going to render completely fine. So if I put it up on a website, or anywhere else for that matter, unless I'm posting it to something like Twitter or Facebook, all that data is going to survive, but from a viewing standpoint nobody's really going to know it's there unless they know to look for it.

So one of the recommendations here, and this gets back to a presentation I did at Black Hat in 2004, is this: if I know the tools being used to embed the hidden data, then rather than looking for the hidden data itself, I can ask whether the tool leaves a fingerprint behind within the image. If I map those fingerprints out, then even when I haven't actually found hidden data, or didn't know to look for it, I can build a library of all the known tools for steganographic embedding within images and use that as my detection method instead. I had written a tool called StegSpy and released it at Black Hat way back, and it did exactly that: it identified 13 different types of tools that were used for
steganography or embedding within an image. That might be particularly useful when you have an instance of least-significant-bit or DCT embedding, which, as Chet mentioned, is a lot more difficult to detect, although in the next presentation Chet's going to show you how we can actually detect that stuff too. But that gets back to the point: LSB is really difficult to detect, but maybe I can detect the tool that was used to perform the LSB embedding, because it left behind a fingerprint, and that's going to be a lot easier to find.

The other thing, and this is more fundamental network security: as we looked at just a couple of simple TTPs, there's this outbound upload, this access to things like Pastebin and other seemingly benign sites. In the context of your enterprise security, if you're responsible for the enterprise network, do you really want to allow access to things like that, or should it be blocked on the network? That's up to you to decide.

I had also touched on social networks, which were leveraged for some of these types of attacks. Beyond the insider threat I spoke about, if you had a feed alerting you to the malicious accounts across social media, especially ones that hadn't been taken down yet, because it can take anywhere from a few hours to a few weeks before those come down, does that give you some interesting context for identifying things like insider threat? That's another thing to think about.

So we do find that stego augmented malware is on the rise. We're going to continue researching this and backfill that diagram we started with, to see how quickly it's increasing. We did prove out last year and the year before how this could be exploited through MP3s and MP4s, and demonstrated it.
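In the spirit of that fingerprint-based approach, here's a minimal Python sketch of signature scanning: rather than detecting the hidden data directly, it flags known tool fingerprints and any bytes trailing the JPEG end-of-image marker. The tool names and byte signatures are invented for illustration; they are not StegSpy's real signature set:

```python
# Hypothetical StegSpy-style scanner: flag data appended past the JPEG
# end-of-image marker and match it against known tool signatures.
# SIGNATURES below is made up for illustration purposes only.

EOI = b"\xff\xd9"

SIGNATURES = {
    "ToolA": b"TA01",     # hypothetical trailer signature
    "ToolB": b"\x00EMB",  # hypothetical trailer signature
}

def scan_jpeg(data):
    """Return a list of findings for a JPEG byte string."""
    findings = []
    end = data.rfind(EOI)
    if end != -1 and end + 2 < len(data):
        trailer = data[end + 2:]
        findings.append(f"{len(trailer)} bytes appended after EOI")
        for tool, sig in SIGNATURES.items():
            if sig in trailer:
                findings.append(f"fingerprint matches {tool}")
    return findings

sample = b"\xff\xd8<jpeg body>" + EOI + b"TA01hidden-payload"
for finding in scan_jpeg(sample):
    print(finding)
```

This is the appeal of the technique: the scanner never has to decode or understand the hidden payload, it only has to recognize the debris a known embedding tool leaves behind.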
As I mentioned, I had not only done it with Pandora, and I can't mention the other services by name, but we showed a variety of things in that presentation. So when you think about it, this has a lot of applicability to audio and video formats as well. And I think I've pretty much covered the rest of the points, so thank you very much. Hopefully you got a few good gold nuggets out of this, and some things to think about in terms of the new forms of stego augmented malware and the research we're doing around it. Thank you very much.