Our next talk is titled "Deepfakes, Deep Trouble: Analyzing the Potential Impact of Deepfakes on Market Manipulation," and it is given by Anna Skelton. So if we could all give her a round of applause.

Hi, everyone. Thanks for joining me. As he said, this is "Deepfakes, Deep Trouble: Analyzing the Potential Impact of Deepfakes on Market Manipulation." My name is Anna Skelton, and I am happy as heck to have you all here with me today. So, to start with a little bit about me: I studied Global Security and Intelligence Studies at a little school in the middle of the desert for nerds. I was picked up by GoDaddy initially, helped design their travel security program, and it was there that I was sent to my first DEF CON, which I went to as a complete and total infosec newbie. I met my mentor, Mike, who gently guided me towards infosec and said maybe it wasn't as scary as I thought it would be. I was picked up by a large financial institution into an infosec generalist role, and have since transitioned into strategic cyber threat intelligence, which I absolutely adore. In my free time, I like hanging out in my F-150 pickup that I've converted into a camper named Harvey, or snuggling up with my Siamese kitty cat, Eleanor.

So before we get started, I wanted to give a little bit of information on how I'm going to be delivering this talk. I believe that the barrier to entry for new topics in infosec should be lower. I know that some of you in this room definitely know more about this than I do, but some of you might not know anything at all. So when I explain things, I'm going to do so first in a very technical way, and then I'm going to go over it again in a layman's way. In my experience, even if you're really comfortable with the material, hearing it explained in a different way can help shift and mature your perspective. You'll find that this talk is very linear, and the way it's laid out will start with deepfakes: what are they?
We'll move into market manipulation and a little bit of an explanation of the markets. We'll look at past examples of negative cyber influence on the stock markets. We'll talk a little bit about my own misadventures and adventures in deepfake creation. Then we'll talk about future possibilities and wrap up by exploring some solutions.

So let's get started. Deepfakes! It's quite the buzzword right now. The traditional deepfake model uses generative adversarial networks to exploit human tendencies. Basically, two machine learning models, the generator and the discriminator, go head to head. The generator develops the videos, and the discriminator points out and identifies the forgeries. And they'll just keep going and going and going until the discriminator can no longer detect the forgery. So we're going to pop on over to YouTube here for a quick example of that.

[Video clip plays: "Did you get a lot written today?" "Yes." "Hey, the weather forecast said it's going to snow tonight." "What do you want me to do about it?" "Don't be so grouchy." "I'm not being grouchy. I just want to finish my work." "Okay, I understand. I'll come back later on with a couple of sandwiches for you. Maybe you'll let me read something then." "Wendy, let me explain something to you. Whenever you come in here and interrupt me, you're breaking my concentration, you're distracting me, and it will then take me time to get back to where I was. Understand?" "Yeah."]

So I've never seen The Shining, and I probably will never see The Shining. But if you had told me that Jim Carrey was in The Shining and then showed me that video, I would have been like, yeah, okay, I can see it, sure. So there's another model that's being developed that uses a few-shot capability. Essentially, it's coming out of a research team that's a partnership between Cornell and Russia.
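Before moving on, the generator-versus-discriminator loop just described can be illustrated with a toy, numbers-only sketch. This is emphatically not real deepfake code: the "generator" just emits numbers around a mean, the "discriminator" classifies a sample by which mean it is closer to, and every function and parameter here is invented for illustration. The point is the dynamic: the generator keeps adjusting until the discriminator is reduced to coin-flipping.

```python
import random


def train_adversarial(target_mean=4.0, start_mean=0.0, rounds=3000, lr=0.01, seed=1):
    """Toy adversarial loop. 'Real' samples come from a Gaussian around
    target_mean; the generator starts far away and, every time the
    discriminator catches one of its forgeries, nudges its own mean
    toward the decision boundary -- until real and fake are
    indistinguishable."""
    random.seed(seed)
    gen_mean = start_mean
    for _ in range(rounds):
        fake = random.gauss(gen_mean, 1.0)               # generator's forgery
        boundary = (target_mean + gen_mean) / 2.0        # discriminator's best split
        caught = abs(fake - gen_mean) < abs(fake - target_mean)  # flagged as fake
        if caught:
            # Move toward the boundary, i.e. toward fooling the discriminator.
            gen_mean += lr if boundary > gen_mean else -lr
    return gen_mean


def discriminator_accuracy(gen_mean, target_mean=4.0, trials=2000, seed=2):
    """How often the nearest-mean discriminator still classifies correctly."""
    random.seed(seed)
    correct = 0
    for _ in range(trials):
        if random.random() < 0.5:
            x, is_fake = random.gauss(target_mean, 1.0), False
        else:
            x, is_fake = random.gauss(gen_mean, 1.0), True
        guess_fake = abs(x - gen_mean) < abs(x - target_mean)
        correct += (guess_fake == is_fake)
    return correct / trials
```

Before training, the discriminator is nearly perfect; after the loop runs, the generator's mean sits next to the real one and the discriminator's accuracy collapses toward 50%, which is exactly the stopping condition described above.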
It performs lengthy meta-learning on a large data set of videos and is then able to do few- and one-shot learning of talking-head models using high-quality generators and discriminators. So in layperson's terms, it just watches a bunch of videos of people talking and then uses a few images of the actual target to create a living portrait. Now, I bring this up because it shows that deepfakes, even as a concept, continue to grow and develop, right? We're not seeing that just what we know now as deepfakes is where it stops. This is going to continue to develop as time goes by. Now we'll pop back over to YouTube for another example of that.

[Video clip plays.]

So that shows you how just one image of a painting done literally hundreds of years ago can be put into this format and come out as a living portrait that looks like she's talking. And Washington is starting to take notice of the deepfake threat, right? Marco Rubio recently said that deepfakes were as dangerous as the threat of nuclear war. So we're talking about some pretty serious stuff here.

As far as the laws go, the legality of deepfakes is kind of all across the board. In Virginia, they recently passed legislation outlawing deepfakes as part of a bill combating revenge porn. In Texas, there's a law that goes into effect on September 1st that outlaws deepfakes as used in elections. But if we just keep approaching this threat from a state-by-state legislative perspective, it's going to be completely inadequate to cover the aggressive deepfake threat. There are two pieces of legislation about deepfakes in the House of Representatives right now, but neither of them is gaining that much traction. And even if they do, we all know how long and arduous the federal legislative process can be, especially with a divided Congress.

So let's take a minute now to introduce the market. Within the market, you have three different components.
You have the currency markets; the equity markets, which is like the stock market; and the bond markets, which are backed by the Treasury. It would be hard for deepfakes to impact currency markets because they so intricately deal with the relationships between two different nation-states, so for the purpose of this talk, we're just going to go ahead and take currency markets out of it. Within the equity markets, you have the Dow Jones, which is very banking-heavy; you have the NASDAQ, which is very tech-heavy; and you have the S&P 500, which is kind of a mix of everything. It's worth noting here that there is a trading curb rule, which is basically a fail-safe: if enough suspicious activity happens, it will automatically halt trading. That's important. We'll come back to that.

So market manipulation is narrowly defined as artificially affecting the supply of or demand for a security. Essentially, this is manipulating, for example, a stock price using misleading information about a company or an individual or really anything at all. We can assume that some level of market manipulation happens every day, but deepfakes exacerbate the scale and risk of the damage to the market because the market is so volatile. Today, we'll be looking at this from both a micro and a macro level. At a micro level, we'll be looking at impacting individual stock prices. At a macro level, we'll be looking at the potential for deepfakes to cause a serious and significant domino effect with widespread damage. I'd also like to point out this vaguely unsettling photo that comes up when you Google image search "market manipulation." Every time I look at it, it just makes me kind of uncomfortable.

So, taking a quick look at past examples of cyber threats impacting the economy: you might remember in 2013 when the Associated Press's Twitter account was hijacked and a tweet was published saying that there had been an explosion at the White House and the President was injured.
Immediately, the Dow Jones plummeted, and the S&P 500 reportedly lost $136.5 billion in market capitalization, which is the value calculated by multiplying the number of shares by the price of those shares. As you can see, it quickly bounced right back up, but in this case, a lot of the damage was caused by computerized trading algorithms that monitor social media and news sites and then adjust stock prices based on predetermined rules.

You might also remember in April of this year when there was a huge run on Metro Bank in the UK after a WhatsApp rumor said that Metro Bank was no longer liquid, and people were literally standing in line in the streets for hours waiting to pull all of their money out of this bank. It was completely a rumor, but this one turned physical: a lot of people who were leaving the bank were robbed of literally everything they had.

And even right now, there's deepfaked audio out there: attackers are using the voices of executives of large financial institutions to basically bully lower-level employees into transferring money into not-so-savory accounts. Already, it's been reported that this has taken $3 million, and that was just as of the end of June, and that's just what has been reported. So you have to imagine that realistically, that number is a lot higher.

So I'm going to take a second here to talk candidly with you guys about my own adventures and misadventures in deepfake creation. When I started this project, I assumed the barrier to entry was, like, here [gestures low], so low that literally any Joe could walk up to a computer and be like, I made a deepfake, right? And I was wrong. Now that I've gone through this process, I would say it's probably somewhere more around here [gestures higher]. So it still is an accessible technology, especially if you have the time, the patience, and even just an inkling of technical experience. I still think it's a very viable threat vector.
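The market mechanics just mentioned can be sketched in a few lines: market capitalization, a "predetermined rule" of the kind those trading algorithms apply to headlines, and the trading curb fail-safe from earlier. This is a hedged toy: the keyword list, the weights, and the single 7% threshold are all invented for illustration; real exchanges use tiered, far more elaborate rules, and real trading systems use far more sophisticated NLP than keyword matching.

```python
# Hypothetical keyword weights -- purely illustrative, not any fund's
# or exchange's actual rules.
NEGATIVE_KEYWORDS = {"explosion": -0.05, "injured": -0.03, "insolvent": -0.08}


def market_cap(shares_outstanding, price):
    """Market capitalization: number of shares times share price."""
    return shares_outstanding * price


def adjust_price(price, headline):
    """A 'predetermined rule': shave a fixed fraction off the price
    for each negative keyword spotted in a headline."""
    impact = sum(weight for kw, weight in NEGATIVE_KEYWORDS.items()
                 if kw in headline.lower())
    return round(price * (1 + impact), 2)


def curb_triggered(open_price, current_price, threshold=0.07):
    """Trading-curb fail-safe: halt if the drop from the open meets or
    exceeds the threshold (7% here, echoing the halt discussed later)."""
    return (open_price - current_price) / open_price >= threshold
```

Run against a fake "explosion / injured" headline, a $100 stock reprices to $92.00, an 8% drop that trips a 7% curb; a single milder keyword produces a 3% drop that slides right under it, which is exactly the distinction the attack scenarios below turn on.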
So within my own research, I uncovered two schools of thought. There's DeepFaceLab, which is an active GitHub repository, and FakeApp, which was taken offline in early 2019 but is still available. Interestingly, both of these have Windows dependencies, and they both rely on Nvidia graphics cards. I learned that if you're using VMware, the VM cannot access the graphics card of the computer you're using, which was interesting. So you'd need a system that, first of all, is running a Windows OS, and then if you don't already have an Nvidia graphics card, you'd have to use an external graphics card. That was actually really surprising to me.

I also ran into a couple of fun legal issues when I was creating my deepfakes. I got yelled at by a corporate lawyer, and he said, you can only make a deepfake of yourself. And I said, lame. I could literally make a video saying, I really don't like Parmesan cheese, and then I could stand up here and be like, I didn't say that. I would never say that. So I kept pressing and eventually convinced him to let me try to find somebody who works for the same financial institution I do who was willing to give written legalese permission for me to make a deepfake of them.

Some of you may recognize this guy on the screen here. This is David Mortman. He runs the CFP for BSides Las Vegas. He's been around forever. Super cool guy. And super willing to let me make a deepfake of him. So that worked out really well. So as you can see here, these are my Davids; he's no Kim Kardashian, right? But it was a very easy Google search to go in and find two videos to extract from that feature him from the head and shoulders up with nothing in front of the face. And actually, this one down here would be considered subpar because it does have the microphone. But look, I was just taking what I could get.
So, diving a little bit deeper into the two different schools of thought. DeepFaceLab, like I mentioned, is an active GitHub repo. It offers three different packages, which are dependent on your graphics card specs, and they're available for download on Google Drive. It's actively updated; the last update was just in July of this year. And it's a lot less user-friendly. As you can see from this very blurry screenshot here, you're essentially running your own commands on the video. So it's a much more manual process, and I think that raises the barrier to entry considerably. The YouTube guides are subpar. Apparently the guy who runs DeepFaceLab is just one guy in Russia, and his English is, like, non-existent. So he epically Google-translated his entire README, which, as you can imagine, is exceptional. Probably one of my favorite parts of this process. For instance, you can see down here: "DeepFaceLab created on pure enthusiasm one person. Therefore if you find any errors, treat with understanding." Like, I love this guy already. But as you can imagine, the YouTube video that he made to guide everybody through the process doesn't have any audio. So you can imagine, if you're coming into this as somebody who's not used to following along with technical videos that are moving really fast, it would make it difficult. There are some, of course, where somebody else has added audio, but I was overall not very impressed.

And then there's FakeApp. FakeApp was taken down in February of this year. It was originally hosted on fakeapp.org, which has also since been taken down. The last published version is version 2.2. The majority of the issues I ran into with FakeApp are around this point right here: it has considerable dependencies that are very specific. Like I said, the last published version is from February of this year, and all of its dependencies require the exact software that was current in February of this year. So, for instance, CUDA: it likes it, it needs it.
CUDA is now on version, like, 10.1.3 or something like that, and FakeApp requires CUDA version 9.7.1 with a specific patch set. It's really annoying. Also, Windows Visual Studio 2015, exclusively, which is weird, right? Because you have to have a Windows license to get older versions of Visual Studio. So that's where I ran into the majority of my issues. However, the YouTube videos: impressive. It's just a guy slowly walking you through everything step by step, very easy to follow. And some of his videos rack up, like, 400,000 to 500,000 views, and they're still getting views today, which means that I'm not the only one out here trying to get into FakeApp. The application interface is extremely easy to use, especially when compared with DeepFaceLab. You can see over here, you essentially just direct it to where you want it to pull the video from, and then you click Extract. Then you go over to the video you want to train from, and you click Train. And then you get over to this tab where you create it, and you click Create. So it's a lot more user-friendly, and even the application interface is so much friendlier than manually running the commands yourself.

However, there's another issue with FakeApp, and that is... the forum. Okay, so apparently I need to figure out how to lower my voice to say that more intimidatingly, like: the forum. Something more like that. So apparently, when fakeapp.org was still online, it had a page which was a forum where you could go and ask any and all of the questions that you ran into when you were using this. And I guess it was probably super helpful. So now, any time that you run into an error in FakeApp, it sends you this very polite message: "Check the end of the log file for details and feel free to post it on fakeapp.org/forum for help." And you're like, no, I can't. So these are the two errors that I ran into approximately 50 times each, just trying to get through the extraction portion.
Interestingly, there are Reddit posts on both of these errors, but it's literally, like, one guy being like, hey, I have this error. And then another guy comes in and is like, dude, me too. And you're just like, come on, guys, where'd you figure it out? And then you remember that they probably used the forum. So overall, FakeApp worries me way more than DeepFaceLab, because it's way more accessible. And what I'm concerned about is another FakeApp essentially coming out that makes it accessible again. It's not online anymore; you have to go look for it. But whatever comes out next is probably going to be even more accessible and easier to use.

So let's take a second to talk about what you could do if you were able to create a really high-quality, sophisticated deepfake video. Earlier I mentioned the trading curb rule. It's not enacted often, but it is enacted. The last time it went into effect was in China in 2016, when Chinese stocks fell 7% within 27 minutes of opening. So in order to get around the trading curb rule, you'd either need to be sneaky enough to just slide right in under it, or you would need to create enough impact and enough damage that it just didn't matter at all; even if it was triggered, it wouldn't matter anymore. To slide under the fail-safe, you could perhaps make a deepfake video with a super negative sentiment about a financial institution and then pass it around even just on Twitter, even just among Twitter bots. When the trading algorithms connected to the Twitter API pull the data to adjust prices based on predetermined rules, they would see that negative deepfake sentiment and potentially impact the price of that stock. So that's a way you could do it where you're sliding in right under that threshold, hopefully not raising too many red flags right off the bat. If you wanted to cause a lot more damage, though, you could take a slightly different route. So let's start with the Dow Jones, since it's very banking-heavy.
Say you release a deepfake video of the CEO of a large corporation. It's high quality, and he says, yo, my firm is no longer liquid. Probably not like that, though. We saw in the case of Metro Bank that this takes effect immediately, and even liquidity issues on their own are enough to cause people to run into the street, pull out their assets, and cause lasting enterprise impact to that financial institution. And that's just if it doesn't take hold in the stock market. But let's say it does. So, you've already affected the Dow Jones; the next thing you could do would be to release another video, maybe even using the same CEO, blaming a specific tech company for the damage caused by the first video. Now you're affecting not only the Dow Jones but also the NASDAQ and the stocks that reside there. As you can see, it would not be hard for this to quickly spiral out of control, and the trading fail-safe rule wouldn't even be able to stop it.

So let's talk about if you wanted to affect the bond market. This would be a longer-shot situation, because the bond market is backed by the Treasury. You'd need to call into question the ability of the United States to pay off its debts. So this would be perhaps a deepfake video of the Chairman of the Fed or of a high-ranking member of Congress. And what you're looking to do here is essentially pull down the belief of other nations, and even of our own nation, that we can pay off our debts. And this might not sound too out of place, especially when you consider that the budget renegotiations for 2020 are coming up this fall, and they're already raising a significant number of red flags, especially given the extended government shutdown that happened the last time we did budget renegotiations. It doesn't seem out of place to think that some serious damage could be caused with a high-quality, sophisticated deepfake video. So let's talk about solutions.
So by nature, deepfake technology exists to never be detected. It exists to make itself good enough, as do many other AI concepts, that it can't be told apart from reality. So in that way, you really need a short-latency solution. But let's start by discussing the longer-latency solutions, just because those are actually a little bit more built out. The University of Rochester recently released a study saying that they can use integrity scores to essentially grade the videos that you see. It would be a browser attachment, and it would color-code the video based on how much of reality it really reflects. There's a startup called Amber Authentication, which is using cryptographic hashes to discern deepfake videos. There's a very vague startup called New Knowledge that, for the low, low price of $500,000, says it will protect your company from the spread of misinformation, but that's as specific as it gets, and I'm convinced they're not really sure how they're going to do it either. And of course, we could not get through a talk about deepfakes without talking about blockchain. There are several different companies, Factom being one of them, using blockchain to discern between deepfakes and real videos, with varied levels of success.

But like I said, these all need videos to be online for a longer period of time in order to actually be effective. And the market is so volatile that we need a short-term solution that can immediately come into place to stop that damage before it happens. So, as far as I've found, the best option we have right now is just to monitor for the development and release of accessible deepfake software on both above- and below-ground markets: looking for the next FakeApp, looking for the next thing that's going to allow anybody with the time and the patience to sit down at their computer and create a damaging video.
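To make the cryptographic-hash idea concrete, here is a minimal sketch of hash-based tamper evidence over video frames. To be clear, this is not Amber's (or any vendor's) actual scheme; it just shows the core property such systems rely on: a publisher computes a fingerprint at capture time and publishes or signs it, and later anyone can recompute the fingerprint and detect that even a single frame was edited, removed, or reordered.

```python
import hashlib


def fingerprint(frames):
    """Chain a SHA-256 hash over every frame (raw bytes) so that
    editing, removing, or reordering any frame changes the final
    digest. Illustrative only -- not any vendor's real scheme."""
    chained = hashlib.sha256()
    for frame in frames:
        # Hash each frame individually, then feed that digest into the
        # running chain, so order matters as well as content.
        chained.update(hashlib.sha256(frame).digest())
    return chained.hexdigest()
```

A viewer comparing the recomputed fingerprint against the published one gets a yes/no tamper verdict; the catch, as noted above, is that this only helps for videos whose provenance was recorded before they spread.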
Ultimately, reaching these solutions is going to take a lot of different pieces: human review, collaborative research, collaborative sharing platforms, a whole slew of things just to be able to start approaching the threat. But I think we can do it, and honestly, we really don't have a choice. So, throughout this project, I was definitely too blessed to be stressed, and I was stressed anyway. John Seymour, who introduced me, was my B-Sides LV Proving Ground mentor. He's awesome. David Mortman, of course, let me use his face, which I appreciate, and all these other folks were just super, super helpful as well. At the bottom here is my Twitter handle. It's the best way to reach me for questions, comments, qualms, or general existential tidbits, any and all of these things. And that's my talk for today, so thank you all for joining me. Questions? You.

[Audience question.] No, I haven't. So one of the big issues with that type of solution is volume. If you can imagine trying to apply that to every video that exists, or getting the right metadata to accurately reflect that and applying it to all the different videos that exist, that's where I see it potentially running into complications. I haven't done any research on that, but that's definitely something to look into for the future. Anybody else?

[Audience question.] So that's a really good question. On that topic, I am a glass-half-empty person. I genuinely believe that deepfakes will continue to evolve faster than our detection mechanisms can evolve to keep up with them. Yeah. In the back.

[Audience question.] I have not looked into that. I don't know what goes into making any sort of chatbot, but definitely food for thought.

[Audience question.] So, I was not successful in making my deepfake. I did run out of time. I definitely should have started earlier, and I definitely did not have the right technology going into it. Like I said, the Windows dependencies and the Nvidia graphics card dependencies were weirdly big roadblocks. But I spent probably 60 to 80 hours working on it.
A large part of that was just weird troubleshooting and making sure I had the right technology. If you walked in with a Windows computer with an Nvidia graphics card, you could cut off 60 of those hours. In the red shirt.

[Audience question.] That's a good question. I would assume to some extent yes, but like I said with that earlier question, at that point it's a volume issue as well. And another point to bring up here: even if you can detect that a video is fake, if it's already been seen, does it even matter?

[Audience question.] Yes. I think we know, with FaceApp sending data directly to Russia and this one guy out of Russia making DeepFaceLab, that they probably have their own deepfake software. That's actually a really good question. So when I originally started this project, I had to decide if I wanted to look at it from the perspective of a nation-state or from the perspective of perhaps a lower-technical-capability hacktivist group. And I ultimately decided to go at it from the lower-capability hacktivist group. I deal with nation-states in my line of work now, and I have learned that you really can never exactly guess how capable they are; they're more capable than you think they are. So yeah, that's how I chose the scope, to do a hacktivist, but yeah, I'm sure they have their own technology. You, and then you, in order of seat row.

[Audience question.] So they're using, like I said, the traditional model, the generator and discriminator, the traditional adversarial model. I don't know a lot about module encoders, so I couldn't... oh, autoencoder. I thought you were saying "module encoders," and I was like, what is that? Okay. Right. Yeah, I think to some extent the technology is similar, but I mean, I literally came into this knowing zero things about deepfakes, and all I know is what I learned. So yeah, still learning, always.

[Audience question.] Yeah, absolutely. I mean, I think that's just, is that the next step?
Do they commercially release that to the public? Or is it like, you know, when they recently released the BlueKeep PoC and everyone was like, don't do that, and they were like, oh, it's only for people who subscribe? But yeah, I mean, I think it's just waiting to see if they're going to release that software. I absolutely believe, I think we know, that they have the capability to develop it, but whether they'll make it commercially available is the question. Hopefully not.

Okay, I don't see any more hands, so I'm going to go ahead and say thank you for joining me today, and I hope you enjoyed it.