transformer, that's already efficient. We just get rid of all the one-by-one steps. But unfortunately, we lose the position of the word. When you pass one word at a time, you can say first word, second word, third word, fourth word. If you pass everything at once, you can't do that, right? So what transformers brought to the conversation was positional encoding, where you combine the embeddings of the words with the concept of where they actually sit in the sequence. And that becomes a position-aware encoding. Something simple: get the word, get the numerical value of the word, understand the semantics of the word, understand where its position is, and that's what we call a position-aware encoding. Something that I did not know even existed, and of course there is more context to this, but I don't want to give you a headache going through all these details. Self-attention: the basic concept of attention is that when you have a sequence of words, you can understand how one word relates to the rest and how strong its relationship is with certain words. That's the concept, and transformers do that on their own. This is another example, translating English to French, and by the way, don't ask me to say the French words here because I'm not that good at it, but you can see how in the two English sentences, the word "it" in the first one refers to the animal and in the second one refers to the street. So you can see how this model allows us to understand the relationships between words, and without all this one-by-one processing and the large amount of memory, it can actually understand this. So transformers are now becoming a thing, not just for text. The architecture was built, once again, for translation, but the concepts are now being used for more than just text. It's becoming this cross-modal, general architecture.
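To make the idea concrete (this sketch is not from the talk): the original transformer paper used fixed sinusoidal positional encodings, which are simply added to the word embeddings so each position gets a unique, smoothly varying vector. A minimal sketch, with a toy 4-dimensional embedding:

```python
import math

def positional_encoding(position: int, d_model: int) -> list[float]:
    """Sinusoidal positional encoding from 'Attention Is All You Need'.
    Even dimensions use sine, odd dimensions use cosine, so every
    position produces a distinct vector of size d_model."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# A "position-aware" embedding is just word embedding + positional encoding.
word_embedding = [0.1, 0.2, 0.3, 0.4]   # toy 4-dim embedding (illustrative)
pos = positional_encoding(3, 4)          # encoding for position 3
position_aware = [w + p for w, p in zip(word_embedding, pos)]
```

Because the encoding depends only on the position index, the model can process all words in parallel and still know where each one sits.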
So when it comes to text, we can now talk about large language models. A large language model is a neural network that, for example, could use a transformer architecture, process large amounts of data, and learn parameters. Those parameters are what help us capture, for example, the statistical relationships in a language, at scale, with a lot of information. Now, the interesting part that we're missing is: wait a minute, so far you've been talking about training something, but you also said you need to somehow know the actual target value, so when the model predicts something, you can tell it when it was wrong. When we talk about that amount of data, you cannot just do that, because you cannot label every single thing that is out there on the internet. So there's this concept of self-supervised learning, which allows the model to learn from the information it's processing as input: for example, it tries to predict the next word after a sentence, or a few words get masked and it tries to predict them, and that's how it starts learning on its own. Now, when we talk about training, pre-training is very important to understand as well. You can have a model that goes through a lot of data, a lot of money, a lot of computation, many days or months, and from a large language model perspective, you will get a pre-trained model, something that has a lot of knowledge of the language, but that you will have to tweak, fine-tune, to make it useful for a specific or specialized task. You can still use it for certain things that I'm going to show you in a little bit, but if you want it to be very specialized, and maybe reduce the amount of errors that this base, pre-trained model can have, you have to fine-tune it. And that's what we mean by pre-trained and fine-tuned.
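As a toy illustration (not from the talk) of where the "free" labels come from in self-supervised learning: every prefix of a raw sentence can serve as an input, and the word that follows it as the target, so the text labels itself with no human annotation:

```python
def next_word_pairs(text: str):
    """Self-supervised labels for free: every prefix of the sentence
    becomes an input, and the word that follows it becomes the target.
    The raw text supervises itself -- no human labeling needed."""
    words = text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

# Each (input, target) pair is one next-word-prediction training example.
pairs = next_word_pairs("the model predicts the next word")
```

Masked-word training works the same way, except a word in the middle is hidden and becomes the target instead of the final one.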
So think about this for a second. Base large language models, like for example GPT-3 and 3.5: when you tell one to predict the next word, it will do it for you. But if you ask it something directly, it will get a little confused. It'll be like, wait, are you asking me for similar questions? I can do that, I can just provide similar questions back. You have to fine-tune it to be able to hold that type of conversation. So there is the concept of an instruction-tuned LLM, a pre-trained model that has been fine-tuned to interact with you as if it were actually a conversation, understanding instructions, right? So GPT models, now that we understand some of this, are generative pre-trained transformer models. I hope that's clear now, what GPT is. Rather than telling you what it is on the first slide, I hope the explanation helped a little bit. The concept here is that this model was trained to predict the next word, it went through a large amount of data, and it's transformer-based; the concepts here, I hope, make more sense now. It's 175 billion parameters, 96 hidden layers, so layers of neurons, and it was trained on apparently 500 billion words, or tokens, is what I was reading. This type of model is what we also call a foundation model, because it's a model that you can now use and adapt to perform other tasks, or you can fine-tune it, as I mentioned before. And the concept of foundation, or general, models means that before we had all this knowledge about language, specific models were being created for one specific task. You would collect labeled data, not a large amount, but enough, you would build a model, and you would try to solve the problem. Let's solve phishing, let's solve ransomware. You would build a specific model for each specific task.
A foundation model, because it has this foundational knowledge, can be super useful: it can be fine-tuned with a small amount of data, as I showed you before, and then even achieve better results than a regular task-specific model. If that doesn't make sense, let's talk about my dog, Stevie. So here is Stevie, turned into a cartoon using Midjourney. Think about Stevie: my dog is like a foundation model, because my dog can be adapted; it has enough knowledge as a dog of the things it can do: sit, come, stay, all this stuff. But if I want it to do a specific task, I need to fine-tune Stevie. If I want it to be a rescue dog, it's not as easy as saying come, sit, stay. It has to go through real training, more than just saying a word and expecting the dog to do something. So that's the concept of foundation models; that's how you can explain it if somebody wants it in other terms. Now, because these models do have enough foundational knowledge, you can adapt them to do things such as: based on your knowledge, write me a poem, write something for me. You can adapt them without fine-tuning, and that is changing the way we think about traditional ML. Now it has turned into prompt-based ML. A prompt is a query that you run against the model in order to perform a task, right? So that's pretty interesting, how we went from the typical workflow to this; prompt engineering is going to be a thing soon, a real job. And for the out-of-the-box capabilities of these types of models, you have to understand that there are two sides: the understanding of language and the generation of text. Because that's how you're going to start categorizing which use cases you can actually use them for.
On the understanding side, you're talking about understanding code, for example, or understanding command lines, where you just want it to tell you exactly what's going on. Sentiment analysis, like what I showed before with a phishing email: you can use these models for that type of task. On the generation side, you start talking about text summarization: a bunch of incidents, summarize that, maybe summarize it alongside this other incident and tell me what's going on right away. This also enables dialogue systems, because you can go back and forth and create some type of conversation. Text generation, because you can tell it to write things for you, and text translation too. And that applies not only to English-to-French, that type of translation; you can also do language to code, for example SQL queries, et cetera. So now that we went through all of this, these are the foundations, the basics of these models, starting from neural networks. We now know about GPT, some of the basics of GPT, some of the capabilities. So what can we do with something like this? My goal is to share some of the experiments that I did, to hopefully inspire you, so you can do your own. So let's jump directly to the code. I put together this Jupyter book, which I'm going to release today. It's not complete, this is just the first iteration. Hopefully you guys can see it? Yes? Is that good? Okay, maybe I'll zoom in one more. All right. So the first thing we need to do is import the libraries to start interacting with some of the companies, like OpenAI, that allow you to use their GPT models via an API. You can use other models as well, but let's just do OpenAI for now. Then you need to define your OpenAI key, set it.
I usually set it as an environment variable in a .env file, and then I load those environment variables from the .env and assign the key to a variable. That way, I don't disclose my key while I'm talking to you guys, right? The next step is to create a function that allows us to interact with this API. The first thing we need to do is define the model. We're dealing with GPT models here: GPT-3.5 Turbo is the model before GPT-4. 4 is the current one used by some applications, like that chatbot that exists out there, and 3.5 is the one where we actually have enough documentation. So we define the model. We also want to pass messages to this call, and then we have the temperature. With temperature, you can tell the model how much creativity, how much randomness, you would like in the responses. If you set it too high, it might start hallucinating a little bit more for you. So the first concept is something very simple. Let's start interacting with this. Let's create a prompt and say: classify the email subject text below. The key in prompting is that you have to be very specific. So I want the subject between those three single quotes, and I just give it that text. Right away, this one comes back and says: this is benign, this email subject text appears to be a legitimate request, whatever. But now what we can do is apply concepts such as few-shot learning, which is how you adapt the model a little bit to respond in a certain way. In this case, what I'm saying is: I'm going to give you a few examples. Few-shot learning just provides context before you ask the question. So I'm giving it some subjects, and that's my spelling, by the way. And then I give each one a label, and I say: look, if you see this, this, this, it's malicious. If you see something like this, it's benign. If I run this again, it tells me: oh yeah, that's malicious, by the way. The same subject, right?
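A minimal sketch of what that notebook flow might look like, assuming the 2023-era `openai` Python SDK (`openai.ChatCompletion.create`) and `python-dotenv` for loading the key; the helper name, example subjects, and labels are illustrative, not the speaker's exact code:

```python
# Assumes `pip install openai python-dotenv`. The key lives in a .env file
# so it never appears in the notebook (left commented here):
# from dotenv import load_dotenv
# import openai, os
# load_dotenv()
# openai.api_key = os.environ["OPENAI_API_KEY"]

def build_few_shot_prompt(examples, subject):
    """Few-shot learning: prepend labeled examples so the model
    answers in the same format for the new, unlabeled subject."""
    lines = ["Classify the email subject below as malicious or benign.", ""]
    for text, label in examples:
        lines.append(f"Subject: {text}\nLabel: {label}")
    lines.append(f"Subject: '''{subject}'''\nLabel:")
    return "\n".join(lines)

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    """Single-turn helper around the chat completions endpoint.
    temperature=0 keeps answers deterministic; higher adds randomness."""
    import openai
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message["content"]

# Hypothetical examples, standing in for the subjects shown in the demo:
examples = [
    ("URGENT: verify your account now", "malicious"),
    ("Team lunch moved to 12:30", "benign"),
]
prompt = build_few_shot_prompt(examples, "Your password expires today, click here")
# get_completion(prompt) would then return the label for the new subject.
```

The whole adaptation lives in the prompt text: the model never gets retrained, it just imitates the labeled examples it sees above the question.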
The same email subject as the one before. That basic exercise is the concept of being able to adapt the model to do something it probably couldn't do before. And if you keep iterating on this, of course, this is a basic example, but just imagine it at scale. You can build an email classifier, right? With a foundation model like an LLM. The next one is summarizing, so let me just make this bigger. Same concept: import libraries, get the key, get my function to interact with OpenAI, and then I give it another prompt. Let me make sure that we can see; yep, there you go, that's much better. We can play a little bit with the prompts. We have the text that we would like the model to process, to tell us something about. And then I create an initial prompt where I tell it: your task is to do this. Today you are going to be a cybersecurity professional specialized in reverse engineering, for example. You can provide some of that context, and because the model has this foundational knowledge of language, it will reflect that context in the responses as well. So here we pass the evidence, and it tells us: oh, the actor is disabling Windows Defender, deleting files, doing all this stuff. It's missing a few things, but that's okay, it's just an example. You can see how powerful this could be for security analysts that don't have much of that knowledge yet, probably because they're juniors, or because they're coming from a different background, a different field. It would be super helpful, right? And the last thing I want to talk about, hopefully we still have enough time, is this last experiment that was super cool. I was just thinking: what else can I do with this LLM? Can I maybe start asking questions about my own data?
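The persona-plus-evidence prompt described here can be sketched as chat-format messages, with the "reverse engineering specialist" context in the system role; the helper and the sample evidence below are illustrative assumptions, not the talk's actual notebook:

```python
def build_analyst_messages(evidence: str):
    """Chat-format messages: the system role carries the persona, the
    user role carries the task plus the evidence, delimited with triple
    quotes so the model knows exactly where the evidence starts and ends."""
    system = ("You are a cybersecurity professional specialized in "
              "reverse engineering. Summarize attacker behavior for "
              "a junior security analyst.")
    user = ("Summarize what the actor is doing in the command-line "
            f"evidence below.\n\nEvidence:\n'''{evidence}'''")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# Hypothetical evidence line, standing in for the demo's real data:
evidence = 'powershell -c "Set-MpPreference -DisableRealtimeMonitoring $true"'
messages = build_analyst_messages(evidence)
# messages would then be passed to the chat completions endpoint.
```

Splitting persona and task this way is what lets the same pre-trained model answer as a malware analyst in one cell and as a SOC triage assistant in the next.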
Because the model will only respond up to the date when it was trained. All the knowledge, all the things it knows, go up to when training finished. Whatever happens after that, it won't be able to answer. We can ask: hey, how was NorthSec 2023? And it will be like: I don't know, I finished training in 2021, so I don't know what happened in the past two years. That's exactly what you will get. So in this scenario, we're going to use the concept of retrieval augmentation, where you can provide your own data and then ask questions based on that data. That's another way you can adapt the model to interact with knowledge that doesn't exist anywhere else. Now, granted, this example uses APIs, so some of the data is going out, but this is just an example, right? The interesting piece, as I mentioned before, is that the model's world is a static snapshot. The idea, and maybe this next image will make more sense, is that you ask a question, you figure out how similar your question is to the knowledge you have in a vectorized database, and then you pass that context to the model. What does that mean? What I wanted to replicate is a threat intel team in a company that has intel, all this knowledge about threat actors that you cannot share with anybody else because it's your intel. Maybe you want to share some of it, yes, with partners, but there is some knowledge you cannot share, right? So I was thinking: what public knowledge has some of that context? MITRE ATT&CK Groups has that type of information: they collect public information about groups and map all this knowledge to the groups. So there you go, that's my intel database for my POC, so why not use that? So what I did is use this Python library that I wrote.
It's a GitHub project that allows me to query data from MITRE ATT&CK in STIX format. MITRE ATT&CK has a database, a TAXII server, where you can just get the data. So I wrote some code with this library. I create documents, I tokenize my data — I hope tokenize makes sense now, right? — and then I create the embeddings, which I also hope make sense now, for every single document. So we tokenize, we embed, and then I put it all into a database that understands the numerical values that represent a document and its semantics. Now, when I run my query and say, hey, I would like to know more about these threat actors that exist in MITRE ATT&CK: if I go to the MITRE ATT&CK search capability through the website, all I can do is keyword-based search. All you can do is type phishing, process injection, discovery, and it tries to find the documents. But what if I want to ask: what are the top phishing techniques, or what are the most common phishing techniques in your whole database? The whole thing falls apart, right? Because it doesn't understand that type of conversation. So we start with our query. The query gets embedded. That embedding gets compared with whatever we have in the database, so it knows how similar my query is to each document, and it brings me the top five, top ten documents. Then I can take those, pass them to the LLM, and say: here is my context, I would like to know what the most common techniques used in the world are, something like that. So let's do that. That's the last example. Just making sure, I want to go over that. Oh, sorry, by accident I clicked all the buttons. Let's get rid of this. All right, this is going to work through the whole thing, and once again the code is going to be released, there you go. So we import our attackcti (ATT&CK Python Client) library. We define some variables. We initialize the attack client.
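The embed-and-compare flow just described (query embedding matched against document embeddings to fetch the top-k) can be sketched with a toy in-memory "vector database"; in the real pipeline the vectors come from an embedding model, so the tiny three-dimensional vectors here are stand-ins:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": (text, embedding) pairs. Illustrative vectors,
# standing in for real model-produced embeddings of ATT&CK documents.
db = [
    ("Group uses spearphishing attachments", [0.9, 0.1, 0.0]),
    ("Group performs process injection",     [0.1, 0.9, 0.0]),
]

def search(query_embedding, k=1):
    """Embed the query the same way, then rank documents by similarity
    and return the top-k texts to pass to the LLM as context."""
    ranked = sorted(db, reverse=True,
                    key=lambda d: cosine_similarity(query_embedding, d[1]))
    return [text for text, _ in ranked[:k]]

# A query "about phishing" would embed close to the first document:
top = search([0.8, 0.2, 0.0])
```

This is why semantic questions like "what are the most common phishing techniques" work here while keyword search fails: matching happens in embedding space, not on literal words.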
I wrote a get-all-the-groups function; give me all the groups, that's pretty much what it does. And then I start getting information about all the groups that exist in MITRE ATT&CK, and some of their information, such as techniques, et cetera. Then I define the way I want to structure my documents, because the raw data has information that I don't want; I just want specific things in there. So that's what this does: it lets me pick what I want my documents to look like. And I'll scroll down. So I start processing, creating markdown files for every single threat actor. That's what you see in companies that track intel: you have a threat actor, you have a summary of the threat actor, and you have connections, of course, to evidence and things like that. But the primary overview of the threat actor is super important, right? So we go through all of this. And now we use a tool called LangChain, super useful if you want to dive into this example. It simplifies a lot of these things; it has a lot of wrappers around some of the APIs I was showing you before. So we start loading information from all these documents with LangChain. Then we convert every single group into its token representation, to understand how many tokens we have. This is telling me that there are documents that have 176 tokens, and there are documents that have over 7,000 tokens. When you interact with these models, there is a limit to how many tokens you can pass per call. That's why I do this right here. So, to avoid these overly large documents, we start splitting them. And this is also an art — I didn't know that. Splitting a document is an art when you try to pass it to an LLM. Why is that? Maybe I can show you here. You can split documents into chunks of, let's say, 500 tokens.
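A hedged sketch of the document-shaping step, assuming the `attackcti` (ATT&CK Python Client) library for fetching the STIX group objects; the markdown layout and the field choices below are illustrative, not the speaker's exact format:

```python
# Assumes `pip install attackcti`, which queries the MITRE ATT&CK TAXII
# server and returns STIX objects (left commented to avoid a live call):
# from attackcti import attack_client
# lift = attack_client()
# groups = lift.get_groups()

def shape_group_document(group: dict) -> str:
    """Keep only the fields we want in each markdown document:
    the group name, its aliases, and its overview description."""
    aliases = ", ".join(group.get("aliases", []))
    return (f"# {group['name']}\n\n"
            f"**Aliases:** {aliases}\n\n"
            f"{group.get('description', '')}\n")

# A trimmed stand-in for one STIX group object:
doc = shape_group_document({
    "name": "APT29",
    "aliases": ["APT29", "Cozy Bear"],
    "description": "APT29 is a threat group attributed to ...",
})
```

One markdown file per group, shaped like this, is what gets fed to the loader in the next step.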
And then the overlap is 50. What does that mean? Well, if I have a chunk that ends talking about threat actor X, the next one might start with something not obviously related to that threat actor. Especially with long intel documents about a threat actor, the actor's name doesn't always appear near the bottom, so the information at the bottom might not point back to the top. And it's important to have some of that context in every single chunk. So the first thing you can do is overlap the chunks: repeat the end of one chunk at the start of the next, to carry over some of that information. That way my chunks share context. Another technique is to put the name of the threat actor in every single chunk, so each chunk carries the context of which threat actor it belongs to. That's another technique you can use. Once we do that, we split all the data, and then I export it as a .jsonl file. Why? Because if you truly want to collaborate with people, if you really want to share your information, something like this helps. I created this file that gives you the IDs of the chunks: it tells you this chunk is index 0, that one is index 1, and those two chunks belong to one document. And then I provide the text showing exactly how it was split. So now, when I'm talking to data scientists who want to know how I prepared my data, I can tell them: this is exactly how I did it, and maybe you can use it on your own. And that's better, because some of them don't understand some of this security information, but our role as security researchers is to share our knowledge. That's what this presentation is about: taking some of the knowledge that they have, using it, and even parsing what we know so they can use it too.
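The two chunking techniques just described, overlapping windows and tagging every chunk with the threat actor name, can be sketched in a few lines. Chunk size 500 and overlap 50 match the numbers from the talk; everything else is illustrative (in the real pipeline, LangChain's text splitters expose similar `chunk_size`/`chunk_overlap` parameters):

```python
def split_with_overlap(tokens, chunk_size=500, overlap=50):
    """Sliding-window split: each chunk repeats the last `overlap`
    tokens of the previous one, so context near a boundary appears
    in both chunks instead of being cut in half."""
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks

def tag_chunks(actor_name, chunks):
    """Second technique: prefix every chunk with the threat actor name
    so each chunk keeps that context on its own."""
    return [[actor_name] + c for c in chunks]

tokens = list(range(1200))            # stand-in for a 1,200-token document
chunks = split_with_overlap(tokens)   # chunks of <=500, overlapping by 50
tagged = tag_chunks("APT29", chunks)
```

Each chunk is then small enough for the model's context limit while still being attributable to the right actor.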
So this whole file has that splitting: how many chunks per document, some with a few chunks, some with up to seven chunks, et cetera. Then we're almost done. We create a database with all these chunks. I'll skip that one over there; what I do is simply save all of that in a pickle file, for those who do Python programming. Now I can start querying this knowledge. So I do the same thing as before: import my libraries, set my key. And now I set my database up as a retriever object, which allows me to retrieve information from it. Then I pass one query. My query is: what are some of the phishing techniques used by threat actors? When I pass this through the retriever, it gives me documents that are related to my question. So now I can enable question answering over this information. I take all the documents I received, pass them as context, and ask the same question. And the response was: threat actors have used spearphishing emails with malicious attachments, links to malicious HTML application files, and so on. You get the idea, right? It was able to generate text, using that context, that simplifies the way I interact with my intel database. So if you're an intel analyst, you might want to do this, just to get insights you might otherwise miss, because you cannot read thousands of documents, split them into chunks, and ask questions by hand. We can't do that. So this is what this is about, and you can apply this idea to any type of knowledge in the security realm. There's a lot of stuff that I'm super excited to dive into over the next couple of weeks. At the end, by the way, I took my JSONL file and contributed it to the Hugging Face Hub. So now it's a dataset that people can download and interact with directly.
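The retrieval-augmentation step, pasting the retriever's top documents into the prompt as context before asking the question, can be sketched like this; the document snippets and prompt wording are illustrative, not the actual MITRE ATT&CK chunks:

```python
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Retrieval augmentation: the top-k documents returned by the
    retriever are pasted in as context, and the model is told to
    answer the question using only that context."""
    context = "\n\n---\n\n".join(documents)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

# Illustrative stand-ins for the retriever's top documents:
docs = [
    "APT28 has sent spearphishing emails with malicious attachments.",
    "Sandworm Team has delivered links to malicious HTML application files.",
]
prompt = build_rag_prompt(
    "What are some of the phishing techniques used by threat actors?", docs)
# prompt is then sent to the LLM through the usual completion helper.
```

The model never needed to be trained on this intel: the retriever supplies the fresh knowledge at question time, which is what gets around the training cutoff.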
So that's my contribution to the AI community. And that's all we have. This is the link for the GitHub repo, which, by the way, I just want to make sure is public. So hopefully it lets me do that. OK, real quick. Sorry. Oh my god. See, that's inconvenient. Let's see. And of course, I need to disconnect this, right? Otherwise, I'll do it when I'm done. But what a way to finish the keynote: with a contribution to the community, sharing some of these concepts so that you can take them back, experiment, get inspired, and share your knowledge. This GitHub repo is for that, to share some of these experiments, so you can take them and make them your own. Thank you very much for your time. I hope it was useful. Thank you. Hey, thank you, brother. Yeah. Reduce the number of false negatives while not blowing up the false positives. Yeah, basically. And I'm looking forward to the Q&A. All right. We took a shorter break; we're already a little bit behind schedule, as with every good conference. For our next block, we have the detection block. Our moderator is Jared Atkinson, a security researcher who specializes in digital forensics and incident response. He previously led incident response missions for the US Air Force, on teams detecting and removing advanced persistent threats. He also loves open source and is a lead dev of PowerForensics and Uproot, while blogging about detection engineering. So please welcome Jared Atkinson. Perfect. All righty. Thanks, everybody. Who here is a detection engineer, a blue teamer? Most people, I assume, right? That's why you're interested in the detection block. The next talk that we have is going to be really quite interesting. One of the biggest problems that we have in detection engineering is this dichotomy of false positives versus false negatives, right?
So we have this issue where there are two types of error that we have to deal with, right? We don't want to detect benign things, report them as malicious, and, as a result, waste resources across our networks. But we also want to make sure that we're not missing malicious things. And the problem is that those two things are inversely related. As we reduce the number of false negatives we're going to encounter, casting a wider net to find things, we end up creating a bunch more alerts, and we have this problem of alert fatigue and things like that. Our next presentation is going to really get into that. Our speakers are Emilio Gonzalez, who works on a blue team in a large Canadian organization. He loves to participate in CTFs and create challenges to introduce people to the defensive side of cybersecurity, and he's a co-organizer of MontréHack, a monthly CTF workshop in Montreal. And then I told Remy that I'm probably going to mess up his name a little bit because it's very French, but we have Remy Langevant. Was that right? Okay, cool. Remy has been working on a blue team for a few years as a threat hunter and developer. In addition to threat hunting, they both like to talk about indenting their code, so apparently they're zealots. Are you tabs or two-space? Okay, that was something we should have saved for after the talk, because now you have an uphill battle going against everybody. But welcome, Remy and Emilio, to the stage, and we'll let you take it away, guys. Hi everyone. It's good to be here. So today we're going to talk about detection engineering, alert fatigue, dealing with security operation centers' limited resources, and how we try to overcome that. My name is Emilio, but I've already been introduced, so let's skip that part. And same goes for Remy.
For the presentation, we're going to talk about our context and the problems we were trying to solve; indicators, what they are, why they're cool, and when not to use them. We're then going to show our implementation of a platform that leverages the concept of indicators, which we call Yamaha. We're going to suggest a free and open implementation for people who want to try the concept without having to think about it too much. And we're going to close with takeaways and the future of Yamaha. So, context and problems. We're a fairly big organization with about 50,000 employees, which means we manage lots of endpoints, both workstations and servers: more than 70,000. The workstations are mostly Windows, with a small amount of Mac and Linux. For the servers, we have both cloud and on-premises infrastructure, with a good mix of Windows and Unix servers. We have an EDR installed on the vast majority of machines and a TLS-inspecting corporate proxy that sends logs directly to the SIEM, which means we have lots of telemetry to build detections on. The SOC is fairly big too, with more than 20 SOC analysts slash threat hunters slash detection engineers. And we have a very heterogeneous network that carries decades of IT work from a time when security wasn't a main concern, which is important to be aware of when doing detection engineering. So let's say you wake up today and say: I want to build a good detection. What characteristics should that detection have? First, you want no false negatives: you want to detect all the malicious behavior you're targeting. But you also want no false positives: you never want to alert on legitimate behavior. You want the detection to have a small time footprint, meaning you don't want to exceed the SOC's capacity to respond to all of that detection's alerts; either there aren't many alerts, or the alerts can be investigated quickly, or, in the best world, both.
You want the detection to be fully actionable, which means an analyst will know what to do to investigate and respond to that alert, and can actually do it. And finally, you want the detection to be maintainable: not bloated with exclusions and overly complex logic, so when you look at the detection, it's clear what it does. So, you guessed it, a good detection like that is really a perfect detection. You're never going to get a perfect detection, but you want to approach these characteristics. A small word about alert fatigue, because this is really important. Alert fatigue is a phenomenon that occurs when cybersecurity professionals become desensitized after dealing with an overwhelming number of alerts, so they start to overlook or ignore them, and have slower response times. If you work in a SOC, this is really important to understand. And if you haven't worked as an analyst dealing with a large number of alerts, which all look the same and are all false positives, it might not be obvious that this is something real, but it is, believe me. If you build detections and you don't take alert fatigue into account, you might have a false sense of coverage or security, because you said: well, my detection is built, it detects the stuff. But if you don't consider that analysts can have alert fatigue, you might believe that if something malicious happens, it's going to get detected, whereas if the detection is 99.99% false positives, analysts will perhaps take mental shortcuts and ignore stuff that's actually malicious. So we usually want to address that, and one of the main ways to address alert fatigue is reducing the number of false positives. So how can we handle false positives in a SOC? The most obvious answer is to add exclusions to your detection. This is usually good, it lowers the number of false positives, but the more exclusions you add, the higher the chance that you're going to miss something that's actually malicious.
Meaning, you increase the number of false negatives, so you need to be careful. You can also add a triage layer. Not everybody uses the term triage the same way, but for this presentation, triage means having either analysts or automations that look at alerts and quickly close obvious false positives. This is usually good to do, but we need to be careful, because it actually increases the likelihood of alert fatigue for the analysts who look at the alerts first. You can hire more analysts, but this costs lots of money, doesn't address alert fatigue, and in a talent and labor shortage, it's harder than ever to do. You can automate contextualization and response with a SOAR, and that helps greatly, but it requires a lot of time and resources to do correctly, and it's not a silver bullet: some stuff is still going to have to be done by humans, and that's okay. And perhaps the most common option is not using the detection at all. For the detection engineers out there, this has probably happened at least once. You looked at a technique and said: well, this is so easy to detect, I'm going to build a detection looking at process creations with that filter. You build the detection, you say: I love this detection, it's going to fit so well in our environment. You put it in prod, you wait a day. And what happens? It triggers 300 times that day. And you're like: that's too much for my SOC of three people to handle, I'm going to add exclusions. So you add exclusions, and then it's better: it triggers 30 times every day, which is still too much. So what do you end up doing? You take the detection, you cry a bit, and you throw it in the garbage. Which keeps your blind spots, which means infinite false negatives. But what if there were a way of keeping that detection instead of throwing it in the garbage? So, yeah, we decided to turn to indicators. And what are indicators? What do we mean by indicator?
It's the output of detection logic — of a rule, a content match, whatever you want to call it. The main output you know is probably the alert: there's a SOC analyst that will investigate the output of this detection logic. But we added another one, which is the indicator. So what is an indicator, and what do we do with them? We correlate them to trigger incidents. So let's say you have an indicator for a phishing attempt, but it's not precise enough to raise an alert on its own. And then you have the same thing for persistence afterward, and something else. Then you might just tag them and build an alert for the whole incident. You might also use them for additional context. So let's say you have a ticket to investigate, and you wanna see what else has happened on the same host during the same time period. Well, you can just use what has already triggered as an indicator on that host. You can also use them as threat hunting leads: if you just put them in some UI or some data store, your threat hunters can browse them and find the malicious stuff. And you can also use them to score the entities, to help the threat hunting team prioritize their leads, or to trigger alerts on high scores. And you can do that for machines, users, domains — and especially cats, because they are kind of malicious, or at least suspicious. So yeah, what are the benefits of using them? Well, when we don't have one detection equals one alert, we may bypass some of the negative side effects of the false positives. So what does that mean? We can implement detection logic that we would have just thrown away, like Emilio said, or that would otherwise have caused alert fatigue for the SOC analysts.
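The correlation idea — several imprecise indicators stacking into one incident for the same entity — can be sketched as follows. This is a minimal illustration with invented field names and thresholds, not the speakers' actual implementation:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative severity weights; real values would come from indicator metadata.
SEVERITY_SCORE = {"critical": 1000, "high": 100, "medium": 10,
                  "low": 1, "informational": 0}

def correlate(indicator_hits, window=timedelta(hours=24), threshold=150):
    """Flag hosts whose summed indicator score within a sliding time
    window crosses the threshold.

    indicator_hits: iterable of (host, timestamp, severity) tuples.
    Returns the set of hosts that would open an incident.
    """
    by_host = defaultdict(list)
    for host, ts, severity in indicator_hits:
        by_host[host].append((ts, SEVERITY_SCORE[severity]))

    flagged = set()
    for host, hits in by_host.items():
        hits.sort()  # chronological order
        start, score = 0, 0
        for end in range(len(hits)):
            score += hits[end][1]
            # Shrink the window from the left when it gets too wide.
            while hits[end][0] - hits[start][0] > window:
                score -= hits[start][1]
                start += 1
            if score >= threshold:
                flagged.add(host)
                break
    return flagged
```

With these made-up weights, one medium (10) and two high (100) hits on the same host within a day sum to 210 and cross the threshold, while a lone low hit never would.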
They are less precise, but it's fine, because they are not actioned in a one-to-one manner. It also enables us to detect stuff that's suspicious but not really abnormal. So let's say you see a net user command — your sysadmins do that, and maybe your developers or somebody that knows a bit of command line might do that. It's not malicious, but you might want an indicator for it, so if there's an incident, you can use it to help you contextualize. And they often stack together to build some incident, or a story that helps you investigate or trigger alerts. But can you put anything in these? No — garbage in, garbage out, as they say. They should be meaningful. They should be tuned. Also, if you have a lot of false positives, you should do your exclusions and everything. Something that triggers 10,000 times a day might be fine as an informational indicator — there's no scoring, it's only there for contextualization — but maybe not as a high severity one, because you don't want to score on a lot of noise. And they should be an indicator of attack. So let's say you have a process making a lot of DNS queries: Chrome is maybe not a good process for that, because basically the main thing Chrome does is DNS requests. Okay, so maybe you're thinking these are cool, so you want to implement indicators — but there is basic stuff to do before that. Well, you need a good budget, like anything in security. You need security-minded developers to leverage these indicators, build a platform around them, and make this valuable. You need basic stuff like proper asset and software management, and the SIEM, EDR, firewalls — the log sources, basically — because you need them to build detection logic. If you don't have this, you won't build detection logic at all. And who will build the detections if you don't have people to work on a detection logic team? So this is the basics.
Then you want people to investigate those alerts — if there are alerts and nobody's there to answer them, that's not good. And afterward, you need the detection logic team and the most critical detections before indicators, because indicators are just there to circumvent the blind spots you have with the critical detections. And also a threat hunting team, maybe, that can get more leverage out of these indicators. So let's go to the implementation. What did we do? There are four blocks: the log sources, the enrichment sources, the analytic platform — which is Databricks — and the outputs. First, in the log sources, you want a SIEM. There's probably already some detection logic implemented there; you can put some simple indicators there and forward them to the analytic platform. Other log management tools, the EDR, firewalls, everything — you can maybe pull the data directly from your analytic platform. There are also the enrichment sources, like VirusTotal for domains and binaries, just to get context for something known. Top one million domains, GeoIP, et cetera. And then, in the analytic platform, there's the hunting data, which is like: what are the binaries we have seen, when were they first seen, on how many hosts — same for users, binaries, domains, et cetera. And then all of this goes into the notebook magic, where the scoring and the indicator logic happen, and we output this into a data store. And from there, we have three types of outputs. We can open a ticket directly in the ticketing system to get an investigation. The SOAR can pull the data and contextualize investigations automatically. Or there is Power BI, for manual queries from users that want to explore the data. So yeah, in the notebook magic, we explored three main scoring algorithms. One we called TF-IDF, which is basically a frequency analysis: if there are many entities that trigger the same indicator, you get less information from the indicator.
So the score just gets lessened. And if it scores on a small subset of machines, then the score gets boosted. Then there's the indicator severity, which is more like security knowledge: if we say this is critical, we give it a thousand points; high, we give it a hundred points; and so on. This is the simple score. And then the anomaly one is more like a baseline: if a host or entity has a score that's stable for a week, and then gets a spike in score, we can boost its score using the delta. So yeah. So, time for a demo. All right. So this is what it looks like for an analyst, and this is the main page. This is mostly for threat hunting. It's a Power BI report with many pages — a Power BI application, I think it's called. As you can see, there's a slider here to determine the date range you want to examine, and the data gets updated quite fast. So we can see here the indicators that have triggered, their severity — critical, high, medium, low, and informational — and here you have, for a given date and a given computer, a score. So the score Remy talked about a minute ago. You can filter easily: if you click, it's gonna filter for that specific computer and day, and you can see which indicators triggered for that machine. You can go into the details here, for example the command line. Power BI has a feature called drill-through, which is pretty interesting, to go to another report by passing arguments. So we can have here a better view of the given date for the given computer, and you can see the indicators that have triggered with a bit more space, so it's easier to threat hunt. And this might be a bit confusing — this is anonymized data, and the indicator names are taken from the Sigma repository on GitHub. You can also drill through to show all the detections for that computer.
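The three scoring approaches just described — TF-IDF-style frequency weighting, fixed severity points, and a baseline delta — can be sketched roughly like this. The constants and function names are invented for illustration, not the speakers' actual code:

```python
import math

def rarity_weight(total_entities, entities_with_indicator):
    # IDF-style factor: an indicator seen on many entities carries little
    # information, so its contribution shrinks toward zero.
    return math.log(total_entities / max(entities_with_indicator, 1))

def host_score(triggered, indicator_stats, total_entities, severity_points):
    """Score one host for a day.

    triggered: indicator names that fired on this host.
    indicator_stats: indicator -> number of distinct entities it fired on.
    severity_points: indicator -> base points from its severity.
    """
    return sum(
        severity_points[i] * rarity_weight(total_entities, indicator_stats[i])
        for i in triggered
    )

def anomaly_boost(score_today, baseline_scores, factor=2.0):
    # If today's score spikes above the recent baseline, boost it by the delta.
    baseline = sum(baseline_scores) / len(baseline_scores)
    delta = score_today - baseline
    return score_today + factor * delta if delta > 0 else score_today
```

An indicator that fires on every machine in the fleet contributes nothing, while a rare one on a handful of hosts gets boosted — which is exactly the TF-IDF intuition from the talk.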
So it's interesting, because you have this now, and the slider to determine the date, and you can see how many indicators have triggered for that specific machine every day. So we can see spikes and potentially suspicious activity, you can filter in the same way that Power BI allows, and you can see what triggered and when. But it's not only for threat hunting; as we said, we can use them to pivot from an ongoing investigation. So let's say an EDR alert gets triggered. The analysts that look at the alert want to see what kind of indicators have triggered, so we have a page for that where you can input a machine and see basically the same stuff that we've seen while threat hunting. So you can see which indicators have triggered, and this might be helpful to pivot during an investigation. And we actually have a success story with this. We had a server with a web application that was compromised, and during the investigation an EDR alert triggered. And by going into Yamaha, we had a small but not obvious indicator: a rare domain was accessed on a small number of machines. And by looking at the domains and pivoting further, we were able to confirm that this machine was indeed compromised, and Yamaha was a key part of that. Returning to threat hunting: as we've said, we have multiple scoring algorithms, so we have multiple views for them. This one is the TF-IDF one. As Remy said, the more an indicator triggers, the lower the score, and the rarer the indicator, the higher the score. So if you notice, these are different machines, sorted by TF-IDF score. Different algorithms are gonna surface different machines, so for threat hunting, having multiple views is really great for all hunting activities. We've talked about scoring machines, but we're not required to look by machine. So we have a page here looking at domains — indicators can be triggered for domains too.
And if we look here, we can filter for a specific domain — that weird domain.windows.net. We can see that three file names triggered that indicator for that specific domain. This is EDR telemetry: we look at DNS requests and map them to the process creation, to see what the file name and the command line are. So again, very helpful to threat hunt. We can also filter by MITRE ATT&CK tactics. When we build our indicators, we associate them with MITRE tactics. So if we, for example, want to do threat hunting only for defense evasion, well, we can filter on that. And it's very fast with Power BI; we can filter only for specific tactics. And finally, there's that page, which is not for investigation nor for threat hunting — it's for the detection engineers. You can look at indicators that have triggered: how many indicators have triggered every day, and for how many computers a given indicator has triggered. So we can see here the node process execution indicator: it has triggered for 48,000 computers, and it's a medium severity indicator. So perhaps we should add exclusions or lower the severity for that indicator, because it triggers way too much, and we can filter. And as we can see, we can look at how many indicators have been triggered each day — for example, 12,000 here, and then it drops to 392 a day. We actually added an exclusion to that indicator using that page, because we saw that it was way, way too noisy. So, back to the regularly scheduled programming. All right, so if you wanna try this at home, or just not spend a huge budget on this, we list some open and free alternatives for you. Yeah, so for the job orchestration, you need something to coordinate the execution of the notebooks — to tell which one runs first, what the dependencies are, and when they should run. We use Data Factory; Apache Airflow is more than capable of this.
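Whichever orchestrator you pick, the underlying job is the same: run the notebooks in dependency order. A minimal sketch of that ordering problem using Python's standard library, with made-up notebook names; Airflow expresses the same graph with operators and `>>` dependencies:

```python
from graphlib import TopologicalSorter

# Hypothetical notebook dependency graph: each notebook maps to the set of
# notebooks that must run before it.
pipeline = {
    "enrich":  {"ingest"},   # enrichment needs raw log data
    "score":   {"enrich"},   # scoring needs enriched indicators
    "publish": {"score"},    # publish writes to the data store / ticketing
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['ingest', 'enrich', 'score', 'publish']
```

For a linear chain like this the order is unique; with a wider graph, `TopologicalSorter` also tells you which notebooks can run in parallel.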
Yeah, so the notebooks run in Databricks, and a Jupyter notebook is pretty much the same in the free and open world — but it doesn't have a managed data store, nor managed clusters; you have to do that on your own with Jupyter. For the data store, we use Delta tables. They're also free and open, I just didn't put it in there. But you can use PostgreSQL or a file system — whatever the analytic engine can parse, which for us is Apache Spark, and that is also free and open. And for the presentation UI, we use Power BI, as you saw, but I don't think there's anything that matches Power BI in the free and open world. So you might just use Jupyter notebooks with IPython widgets and some graphing libraries, or Kibana from the Elastic Stack. And you need data sources. We use a SIEM and EDR, but in the Elastic Stack you also have Elasticsearch and the built-in Elastic Defend — with maybe Sysmon or other defaults on Linux — and you should be able to get going and do some of this stuff. What are the takeaways, and the future of it? Well, they are cool. It's pretty nice; we use them every day. You need good tactic and environment coverage — like anything, but especially good tactic coverage if you wanna build up incidents from different steps of the kill chain. Don't just focus on execution or evasion; you need all the other tactics of MITRE ATT&CK. It does not replace alerts. It's just for the blind spots you have with alerts: when you cannot trigger an alert every time, you use an indicator instead, but it's only a fallback. You need good maintenance, like Emilio just showed you. If you don't do this, maybe Node goes crazy and then you'll have 40,000 indicators per day, and that's not good, so you have to tune them. Your environment is changing, your data sources are changing, so, yep.
And many views, many scorings — because with some scoring algorithms, certain things just don't show up, like small, noisy attackers, so you need another way to make them pop up. The next steps: we want automated alerting based on a sudden increase in score. Let's say a host is behaving abnormally and has a spike in suspiciousness — we want to open an alert from that. We would also want an analyst feedback loop. So, you know, if there's an indicator that is not quite good enough, and the analyst just says yes or no — this is good, this is not — that labels data. It helps us tune the indicators, but we might also use it for automated tuning or suggestions for exclusions, and that would be great. Better indicators and coverage, just like any detection logic work. And exploring new scoring algorithms, like something with machine learning. I hope this has inspired you for your detection logic, so thank you. Thanks. I'm on it. Testing, testing. There we go. Okay, that was a great talk by Emilio and Remy. That's a really cool project. I'm really interested — after Olaf's talk, we're gonna have a little panel discussion, and I wanna try to dig into some of the questions that I had during the presentation, things like how they're developing the scoring, certain topics like that. Our next speaker is Olaf Hartong, all the way from the Netherlands. He's a security researcher at FalconForce and a Microsoft security MVP. He specializes in understanding attacker tradecraft and thereby improving detection capabilities. He has a varied background in blue and purple team operations, network engineering, and security transformation projects. Olaf's going to be talking about how they manage detection pipelines at FalconForce. And I think one of the really cool things about what they're doing is that they're professionalizing detection engineering.
So one of the big problems — I'm a consultant in my day job, and one of the problems that we run into, especially with threat hunting... Who here does threat hunting or interacts with threat hunters? My original background in the Air Force was on the Air Force hunt team, so I've been around that for 15 years or something. And the thing that we run into is that there's a lot of ad hoc nature in threat hunting, and there's often a lot of tension between the SOC and threat hunters. The problem is that the SOC is responsible for security all the time, while threat hunters get to take this smaller-scope problem and be more focused, right? And one of the big issues, from my experience, that causes that tension is that they're not feeding their output back into the steady state. And so one of the things Olaf's going to be talking about is: how do you manage the steady state? How do you make sure things are working? How do you make sure that everything you've deployed is operating as expected? And so I'll let you take it away from there, buddy. Thanks, man. All right. So this is the longest title that could fit on the slide. Basically, we're talking about detection as code. Jared already introduced me, so I'll keep it very short. My name is Olaf. I like warm hugs. I'm a father of two boys, and I used to be a documentary photographer, so I never went to school for cyber. Doesn't really matter, I guess — it's what you do with it that counts. So I'm a detection engineer and security researcher, and that's kind of important for the rest of the context as well. And I work at FalconForce — I'm actually one of the founders. It's a Dutch-based company, but we provide services globally, and we are one team of red and blue, so we're sort of a purple company. And that allows us to do a lot of the things that we do, also in this talk.
So what you can expect here is basically: why we started doing the detection-as-code bit, what it means to us, how we document, store, process, and analyze everything, what some of the benefits are, and then also how that can enable you to do automatic validation. And to start off a little bit: we have a division called FalconForce Sentry, and basically that's doing the detect, analyze, and respond bit for our clients. And that starts out with the most basic model that everybody knows, right? You have all kinds of stuff that generates telemetry. There's all kinds of automated detection logic on top of that. That generates alerts or indicators, and they boil down into an incident, hopefully. But there are some issues with that, as was already alluded to in the previous talk. Stock content is relatively easy to bypass: every red teamer can buy an EDR, test whatever he wants to do, and knows he can bypass everything. That's also what we do. And on top of that, there's all kinds of time wasted on repetitive alerts — alert fatigue, as was explained in the previous talk, so that saves me some time. But we're also not utilizing all of the telemetry that all of these tools provide us. And we don't know how well it's working, or if it's even working at all. So that's why we started expanding that model with all of those additional green blocks. We're building a lot of custom content. We're doing risk-based analysis and scoring on top of that. We're building an engine that can automatically enrich everything based on entities, but also on context — so maybe BloodHound or other stuff will be factored in. But that's not why I'm here, right? I'm talking about the automatic deployment and the replay-of-attacks bit today. It's one of the aspects of the whole ecosystem that we provide. So, detection as code: it's basically removing some of the challenges that I alluded to earlier.
So, who changed this rule, and what was actually changed? Because an analyst, if it runs in a SIEM, can relatively easily make a modification that tunes something — which might actually break the whole detection or create all kinds of false negatives. But also: if I wanna make a change, will it actually work? Will it break these kinds of things? Those uncertainties get partially addressed there. And also knowing, hey, when was this rule implemented, what was actually changed the last time we made a modification, and when was that? And it allows you to ensure some of the quality of the detections, follow the best practices you might wanna follow, and also keep the documentation up to date — because nobody does that, right? We all want to do it, but we're all too busy to actually keep track of it. And whether it's still working — that's maybe even more important. Are the data sources still there? Is the detection actually detecting the attack that I wrote it for? All these kinds of things we wanna address. So, detection as code to us — spoken for us, but most of the community will probably follow something similar — means we follow an agile process, just to keep everything organized. We use YAML as a simple markup language for all of our detection logic, and we plan for reusability. And this is very important to us because we provide a service as consultants, but even in an internal organization you might have multiple levels of detections running in different environments, or you did an acquisition and you can't merge your SIEMs yet. So you have to deploy to two places, and you don't wanna copy-paste everything. Version control, as I started talking about earlier, is super important.
And it's driven by automation, because we're a very small company. We also tend to over-engineer things, for the better, because we wanna do cool stuff instead of replicating basic actions the whole time. And the unit testing is based on realistic alerts: we're not generating base telemetry, we're actually executing attacks constantly and automatically — because then we actually know it's doing what we expect it to do, instead of triggering on a base set that we generated at some point, which might not be realistic for the new way of attacking it. So, the agile process is mostly common stuff, right? We have a backlog, we prioritize, we write dedicated documentation per detection, and then we test and review all these changes. We test in our lab environment, where we basically simulate a small enterprise, and we review all the changes. So if I write a new detection or make a modification, one of my colleagues actually has to look at what I did, to make sure we're doing the right things and not missing anything. And like every agile process, we have standups — 15 minutes, twice a week, so it's not overcomplicated — where we track progress and have some discussions and planning sessions. And we also organize our maintenance there, because feedback from research or from developing detections is actually super important for the maintenance of the whole stack. We might have found a new way of detecting something, or a more optimized way in the query language — these kinds of things we can then factor into all of our other detections if we want. So, as a very simplified example: I have a new idea for a detection. I put it on the backlog, and it gets planned in the planning session. And there we might, during the discussion, already figure out: hey, this is not feasible, we're not gonna be able to do that. So we abandon it immediately.
Otherwise it moves on to the R&D phase, where we start to replicate that attack, look at the telemetry, and basically write a detection for it — which can still go wrong, right? There can be all kinds of reasons why we either abandon it, because we can't get the telemetry for it in a feasible way, or it gets blocked because for whatever reason the telemetry is not there yet, but it will be. Then it gets blocked and we put it back on the backlog for later, or we abandon it at some point because it takes too long or it's not that prevalent anymore. But let's say we actually succeed, which usually is the case, fortunately. We write our detection and commit it to Git, do a pull request, and it goes into review and testing mode. We already deploy it automatically — which I'll show you in a sec — to our test environment, where a colleague can actually review it: look at the output, how it's running, the performance, and the quality of the code and the documentation. In some cases there might be feedback, where it has to go back into R&D for improvements or alterations — and then it can still be abandoned, right? That's always the case. And if it's okay, then everything is happy, everything is fine: it gets accepted into the main repo, and it gets deployed to one or multiple environments. So, no surprise here: we use a Kanban board, with the whole flow I just alluded to in multiple swim lanes. We have a swim lane for use cases, one for our attack scripts, one for maintenance, and one for our enrichment and analysis pipeline, which didn't fit the screenshot. And then we get to the file format. Make it as simple to maintain as possible. You can have JSON and all kinds of funky stuff. We work primarily in the Azure environment, so we use Kusto as a language, and they use ARM templates to deploy — which is a huge JSON file that's horrible to maintain.
So we chose YAML, because it's super simple, and we can still look at it and process it programmatically. From there: try to be as expressive and simple as possible, and plan for the code reuse that I'll give an example of. We can have code we reuse, but also all kinds of lookup lists. So we have a list from the LOLBAS project; we can keep that in a single place instead of putting it into all of our detections, because that's way more work to maintain. If you have it in a central place, you only have to maintain it once. These kinds of simple things are super nice. And since we use YAML, we can actually build a schema for our whole document. So we can validate it programmatically, but it's also super easy to maintain, because you can only choose from a couple of options on some of the fields. It's also designed for the ability to deploy to multiple environments. Even if you only have one SIEM now — as I explained earlier, you might acquire a company, or merge, or want to run in test and prod — you can do that more easily this way. So what does it look like? We have some identifiers in our document. You need to give it a name and an ID — and the ID is basically what it will look like in the SIEM in the end. Then we tag it with all kinds of stuff — things like "boosted pack" or "suspicious behavior"; you can basically make up the list yourself. We have some qualifiers: which operating system, the expected false positive rate based on our experience, the severity that I give to it, and of course MITRE tags for tactic, technique, and sub-technique. Next, we also provide a list of the data sources that the query utilizes, so that we can validate fairly easily whether the data is even available. We tell it the provider — where does it come from — which event ID or event name or action type, whatever it's called, and which table it goes into.
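The programmatic validation that a schema enables can be sketched in a few lines. The field names and allowed values here are invented for illustration; a real setup would more likely publish a JSON Schema and let the editor and pipeline both consume it:

```python
# Minimal sketch of validating a detection document loaded from YAML.
# Hypothetical field names, not FalconForce's actual schema.
ALLOWED_SEVERITIES = {"informational", "low", "medium", "high", "critical"}
REQUIRED_FIELDS = {"name", "id", "severity", "mitre_tactics",
                   "data_sources", "query"}

def validate_detection(doc: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if doc.get("severity") not in ALLOWED_SEVERITIES:
        errors.append(f"severity must be one of {sorted(ALLOWED_SEVERITIES)}")
    if not doc.get("data_sources"):
        errors.append("at least one data source is required")
    return errors
```

Running this in the pipeline means a typo in a constrained field fails the build before the detection ever reaches an environment.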
And then also, again, the MITRE data source components that classify it — so we can generate overviews, which I'll show you later. Then everybody's favorite bit is the documentation — the analysts especially will love you if you do this well. We have a brief technical description of what the query actually tries to do; a description of the attack — what does the attacker want to achieve, so what are we looking at; some considerations, which could be anything for the analyst to be aware of; some known false positives, so it's easier to classify; and some known blind spots — because you can tailor a detection in a specific direction knowing that you're not looking at that bit, so you can record that there: we know we are only focusing on this, and the other thing is addressed in another detection, or we don't do it because of X, Y, Z. And then a basic response plan. Of course, that depends on your internal requirements and what you want to do with this. For one client, we even built a whole playbook design in there, which would automatically be deployed into their Confluence instance with graphs and everything. So this can be very flexible. We also maintain a changelog within the detection, so that we know what was changed when, and what the impact of that change might be. Again, this is extremely useful for tracking across multiple environments, but even for generic overviews you at least get some basic indication of what was changed, without having to go through the whole commit log. Then some basic deployment information: where do we want to deploy it? The language in this case is Kusto, but it could also be SPL or whatever you like. The platform we intended it to be deployed to — in this case Defender for Endpoint, but we can also deploy to Sentinel — and then we can add how often it should run, which entities should be extracted, these kinds of information.
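Put together, a detection file in this style might look roughly like the following. The schema is a hypothetical reconstruction from the fields described in the talk — field names, IDs, and the query are illustrative, not FalconForce's actual format:

```yaml
# Hypothetical detection file; field names are illustrative.
name: Suspicious LSASS Memory Access
id: FF-0042
tags: [suspicious_behavior]
operating_systems: [windows]
expected_false_positive_rate: low
severity: high
mitre:
  tactic: TA0006            # Credential Access
  technique: T1003.001      # OS Credential Dumping: LSASS Memory
data_sources:
  - provider: Defender for Endpoint
    table: DeviceEvents
    action_type: OpenProcessApiCall
documentation:
  technical_description: Detects processes opening a handle to lsass.exe.
  attack_description: Credential theft by dumping LSASS memory.
  false_positives: Some AV and backup agents legitimately read LSASS.
  blind_spots: Does not cover dumps taken via kernel drivers.
  response_plan: Isolate the host, collect the offending binary, check for lateral movement.
changelog:
  - date: 2024-05-01
    change: Initial version
deployment:
  language: Kusto
  platform: DefenderForEndpoint
  schedule: 1h
query: |
  DeviceEvents
  | where ActionType == "OpenProcessApiCall"
  | where FileName =~ "lsass.exe"
```

Keeping documentation, changelog, and deployment metadata in the same file as the query is what lets one pipeline generate wikis, playbooks, and deployments from a single source.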
And then, of course, the most important bit is the query. This is a relatively simple example, just to keep it legible. We use some Jinja formatting, which I'll show you later and explain why — because this is where it becomes powerful with detection as code. And then you get basically a structure like this. I mentioned the schema support already, and this is kind of nice, because in VS Code there's already a way that it knows: hey, for "permissions required" you can only use this. So if you type anything else, you'll get an error, so you can't even fully commit it yet. And it also makes it easier, right? You know what you can type; you can just select it and click. So, that Jinja templating that I mentioned — this is what I really, really like. Jinja is basically a Python module that allows you to do templating. All the green bits are template blocks, which we can automatically replace in our pipeline, and we can even specify defaults. So in the top line — that doesn't work, but — you see a timeframe with a default of one hour. If I don't specify it at all, it will take the default. That's basically what you have to get from that. And then we can even have if-else statements, if we want to make it really fancy. So we can be super flexible. In example one, we have a customer that wants to exclude the servers, and there's some filtering for their environment there. So when we run the pipeline, we basically get an output like this. The timeframe wasn't specified here, so it uses the default. The initiators one also uses the default. Then it adds those two green lines that were mentioned here as a post-filter, and it generates the rest of it — because "exclude servers" is true, this whole block gets added too. For client two, they want it to only run every four hours, they don't want to exclude the servers, and they have PowerShell added to it.
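The templating just described can be sketched with the `jinja2` package. The base query, field names, and per-client options here are invented for illustration — the real queries and defaults would come from the detection files:

```python
from jinja2 import Template

# Hypothetical base query with template blocks and defaults, mirroring the
# per-client customization described above.
BASE_QUERY = """\
DeviceProcessEvents
| where Timestamp > ago({{ timeframe | default("1h") }})
| where FileName =~ "certutil.exe"
{% if exclude_servers %}| where DeviceName !endswith ".srv.local"
{% endif %}"""

def render(client_overrides: dict) -> str:
    return Template(BASE_QUERY).render(**client_overrides)

# Client 1: defaults, servers excluded.
print(render({"exclude_servers": True}))
# Client 2: four-hour window, servers in scope.
print(render({"timeframe": "4h", "exclude_servers": False}))
```

One base query, rendered differently per client — which is exactly why maintaining the template once updates every environment.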
So basically, when we ran it there, it looks very different, right? Even though the same base query is there, we can make it extremely flexible this way, which also makes it way easier to maintain, because we only have to maintain this one bit, and then two clients get a new detection. That's sort of why we started doing this. And this is also where version control becomes very useful, because we can track all changes via commits, and we can enforce — or at least have — peer reviews. Depending on your organization and your procedures, you can enforce it or not, but you can also make the sort of agreement that, hey, you never push your own stuff into the master or main branch unless it's extremely important, whatever. There should be a fallback mechanism. And this also means that if I make a mistake and accidentally push it, I can roll it back as well. And the awesome bit here is that we have one single source of truth now. So if an analyst, for whatever reason, makes an adjustment in production, our pipeline will override it. So the chance of faultiness is way lower this way. And that also allows all kinds of cool stuff, because now that we have it in Git, we can do CI/CD-based stuff, with all kinds of actions: automatic deployments, but also validations. So yeah, this is basically what it looks like. You have all kinds of nice plugins for VS Code where you can actually see who made a commit, what they did, and when they did it. And this is how it looks on our end. We can see that the syntax validation and the deployment were successful. So the reviewer already knows the code at least ran and is up to our standards — and they still have to look at whether the detection is too. And this is where we go more into the automatic validation and deployment bit.
So from a pipeline perspective it's kind of nice because it enables us to do static and dynamic testing of every detection that we build. We can do linting, we have a language server, and we can have all kinds of best-practice checks built into our validation, so that we don't have to do that manually every time, because people overlook stuff and everybody's busy. It also lets us be sure that all the environments we talk to are running the version that we want. And best of all, it generates all kinds of documentation, playbooks, whatever, and puts them in the place you want them. Last but not least, it allows for automatic detection validation in most cases. In most cases, because some attacks are very hard to script, or they require permissions that you don't want a pipeline to have. Nobody wants global admin on their pipeline in Azure DevOps, because there are all kinds of nice attack techniques you can then apply to it. So basically, per environment we have four separate pipelines: one that runs the test cases, so it does the attacks; one that analyzes the attacks; one that pushes the standard outputs to Git; and one that does all the other magic. We call it hatchery, because we're FalconForce and we have a lot of eggs. Every pipeline has multiple stages. In the first stage we validate the syntax. In the second stage we deploy it to our ball pit, basically our lab environment, and we do a dry-run deployment to our production environment, where we only feed it the rule and see if it complains, because sometimes the data in production is different from the lab, and we want to make sure it works on both sides. Then there's a pause that can't be visualized, but basically that is the review step.
If the review is accepted, the wiki is deployed or updated, the detection is deployed into production and enabled, and those kinds of things. So from a pipeline perspective there are all kinds of steps. You can see here that it validated all detections with no warnings. On a warning it will only complain and move on, but we can also generate errors based on whatever is in there. And it's the same on the deployment side. So here it checks whether the rule is already there and is the same as what we wanted to deploy, and then it skips it. If it's a new one, it will just deploy what's new. If there's something different, it will actually pull it, store it in the log, and then push the new version, because that's what we agreed, right? The Git bit is our single source of truth, but at least we record what was there in production. So even if it's a small change, we know that somebody made a change, we can have a look at it, and we can either decide whether we want to include it in our Git repo or not, or talk to the person, like, hey, why did you do that there? So it's just a nice check. The same goes for the update of the wiki and the deployment to the production environment; those are way simpler because we already checked everything. So some of the errors are there. They're not the easiest to read, but once you get used to this flow we're actually quite flexible there, and it will already tell you, hey, some of these things are missing, or whatever. We made some changes to that to make it more user-friendly. One of the other things we have in the pipeline, next to the linting bit, is a language server that we built for Kusto, which is the query language that we use. It can do offline schema validation, and it enables some other things. Basically, it automatically gets all the documentation from Microsoft and updates the schema that it's working against.
So it's sort of emulating a Sentinel or Defender for Endpoint instance, which allows us to build some additional custom parsers and things into it as well. And as I just said, we open sourced it, and we also host an instance which you can use in your own pipeline if you want. We don't log anything, so we don't want to know what you're doing with it. Basically it takes those ARM templates that I started out with, the same way you deploy to Microsoft: you push it against this API, which contains which environment it's in, the query itself, and of course a lot of other stuff, but it ignores that. And you can just feed the simplest query ever into the API, and it gives a lot of output. So you can't read this, but one of the things it does is tell you which columns are coming out of your query, so what are the resulting fields that you get back? And why do you want that? Basically, if I expect something as an analyst, or I have entity mappings or other stuff in my detection logic and my documentation, then you want to make sure that's actually there. It will also surface all kinds of parsing errors: if there's an error in your query, it will already tell you here, before it actually goes into a production or test instance. It also tells you some other nice things, like which tables you're working with and which columns you're referencing, basically which columns you are querying. So if you don't have that data, it will start complaining about it. And afterwards it generates all kinds of documentation overviews. So we have a list of all of our detections, which platforms they're using, and those kinds of things. It's not the prettiest, but it's super efficient. And per client we also generate an overview: which version are they running, what is the latest version, what is it called in their environment, when was it published, what's the status, and does it need updating?
Because in some cases the client actually might want an older version, because they're still running, I don't know, FortiNets from five years ago where the schema is different from the one in our current detection. So we want to have that flexibility there. And then we started adding something else, because a wiki is nice, but a portal is nicer. So we built a portal that's automatically generated every time we merge something into the main branch, where we can navigate the detections much more easily. We can quickly see, hey, do we have a detection for AWS? And then you can just click it and get an overview. You get a prettier version of the documentation, just rendered this way. It's all Markdown on the back end, so it's the same and might look a bit familiar, but it's a little friendlier to look at. And of course we can also generate these nice fancy heat maps, colored by prevalence of detections: the darker the color, the more detections we have for it, because you can't always cover everything in one detection, right? And then the last bit is the unit testing. So now we've built all these detections and rolled them out, but how do we know they're working? We had a couple of goals in mind. We want to make sure that the agent is actually logging the events that we expect when an attack is happening, because otherwise, how can you detect it? That the format of the logging is still consistent, like the FortiNet example, but it can be anything; Microsoft also has a tendency to sometimes change schemas and not always be super exact in their way of logging. So we want to make sure it's still doing what we expect it to do. But also: are the events even arriving? Because if they're generated but don't arrive, what's the point of having them? And then the last one:
Is there an out-of-the-box detection coming from the EDR, for instance, on something that we also wrote a detection for? Then we can diff them and see, hey, does it actually make sense to keep ours? Because maybe the out-of-the-box detection is equally good, so we can deprecate one of our own detections, saving some compute time and maintenance hassle. And ideally, each use case has at least one test case. Sometimes it has multiple, because an attack can be executed in various ways and they're all incorporated into the detection, but we flag all of that. Some design principles that we had: where possible, actually do the attack, don't mimic it. But also, and most importantly, and this is where I think a lot of the other tools miss it, in my opinion: if I execute an attack, I also want to make sure that the attack succeeded, so I want to measure that. Because I can run a script and move on, then see, hey, the detection didn't work, and then maybe I'm looking at the problem from the wrong end, right? If the attack already failed, the detection will never trigger. So how do you validate that? Also, have variables that can differ per environment, like a domain name, file name, location, username, those kinds of things, so you can be a little more flexible there. And if you do an endpoint-based thing, focus on the EDR and not the AV, because AV is trivial; everybody can bypass that with a little bit of effort, so our detections usually don't incorporate it. And that's where we started designing our own YAML format again, which is based on Atomic Red Team, which is awesome; we just extended it a bit with the things we were missing from it. From a logical-flow perspective, we have a detection rule that might have an attack script, and that goes into the attack-script pipeline that we run every 15 minutes. You shouldn't do that; you should do it once a month or whatever if you do it on an internal basis.
We just do it a lot so that we can see results quicker and maintain things better that way. From there, the attack script can be executed directly on a host via SSH or some agents that we built ourselves, but we can also integrate with some of the commercial breach-and-attack-simulation tools like Prelude, and we can work with Caldera or other tools, as long as they can work with that YAML format that we have. That then executes the attack on a target, and the target reports the status back, including whether it succeeded or not, because that's very important to us. The tool, or actually the pipeline, depending on where we run it, feeds that result into Sentinel. But the target also has a role there, because it has an agent running and it logs stuff, so it either alerts or logs to Sentinel, an alert gets generated, and from there we can actually start correlating: hey, the attack ran, it was successful, and it was detected or not. That all goes into a dashboard, which is the manual bit of looking at it. We can also run it from the pipeline I showed you earlier and generate a Slack alert saying, hey, there's a discrepancy between what I expected and what is actually there. For instance, the attack is failing now where it used to work, or the other way around: the attack is still succeeding but it's not detected anymore, so we need to look at it and do some maintenance, and that can be either on the attack side or on the detection logic, wherever one of them is failing. And then, to give a very simple example of what it looks like, this is again a YAML format where we show where it's linked to, which detection we expect to trigger. This one is tied to one of ours, but that can be many; sometimes we trigger multiple detections with one attack, and that also includes out-of-the-box stuff from the EDR or the SIEM. We look at those as well, because why not?
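That correlation step (did the attack run, did it succeed, was it detected) can be sketched minimally. The field names are made up, and the real pipeline obviously reads these records from Sentinel rather than a list; the point is separating "attack failed" from "attack worked but we missed it", because they point at opposite ends of the problem.

```python
from collections import Counter

# One record per detonation; field names are illustrative.
detonations = [
    {"attack_ok": True,  "detected": True},
    {"attack_ok": True,  "detected": False},  # needs maintenance
    {"attack_ok": False, "detected": False},  # attack itself failed
    {"attack_ok": True,  "detected": True},
]

def triage(runs):
    """Bucket detonations so a failed attack is never counted
    as a detection miss."""
    c = Counter()
    for r in runs:
        if not r["attack_ok"]:
            c["attack_failed"] += 1
        elif r["detected"]:
            c["detected"] += 1
        else:
            c["missed"] += 1
    return c

counts = triage(detonations)
# Detection rate only over runs where the attack succeeded;
# if the attack failed, the detection never had a chance to fire.
rate = counts["detected"] / (counts["detected"] + counts["missed"])
print(counts, f"detection rate: {rate:.0%}")
```

The Slack alert from the talk would fire on either bucket changing: attacks newly failing, or successful attacks no longer being detected.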
Then where we can run it from; in this case it's only one variable, but it can also be many, from a domain name and those kinds of things; some global dependencies, so for instance we built one hijacking DLL which we use in multiple attacks, just easier than tailoring it completely every time; and then a basic script which also checks whether it succeeded or not and cleans up after itself, because we want to have a clean system afterwards. Some of those dashboard examples are relatively simple: we have a lot of detonations, and we can see over time how well it's doing and calculate an overall success rate. So we basically know, okay, this one ran 100% of the time, and the two red ones we need to do something about, preferably sooner than after a couple of weeks, right? But that can happen. And on the flip side, we also have a dashboard for all the detections which are tied to the attack scripts, where we can measure, hey, it executed 50 times and we detected it 37 times. So why? And of course you can click everything and drill down into the raw logs to see what was actually executed and why it wasn't detected. In some cases we ran into issues where alert grouping and those kinds of things were occurring, which might be either a misconfiguration in the SIEM, or MDE in this case also wants to be smart and just groups everything into one single incident, and then it's not detected anymore. So that's sometimes also where these numbers come from, but this only occurs if you do it our way, where we run it every 15 minutes, which in a production or large enterprise doesn't always make sense. So then you do it once a week or once a month, and you just have a pipeline that deploys an environment.
You run all your stuff, you let it sit for a bit, and then you kill the environment and start with a clean slate the next time. And then somebody actually has to look at the results, because otherwise, yeah, you can test it, but if nobody's validating what is tested, you're wasting a lot of resources. So to wrap up, because time's up, just some simple things. Detection as code provides a lot of quality control and automation. It's usually a quality-of-life improvement for most of the detection engineers, or the people maintaining this, and you have a lot of opportunities for review and improvement there. It ensures a single source of truth. It allows for automated deployment, even across a lot of environments, and it doesn't really matter how many; that's just config. And it's self-documenting, provided that the detection engineers do a good job, and they get flagged in reviews if they don't. And the best thing that I like about it is that there's some certainty from the validation perspective: you actually know that it's working and doing the stuff that you want. So, not the most technical talk, but if you're curious about more, we can discuss a bit on the panel, and we also have a booth behind here. So if you want to see a little bit more or ask some questions, I'm more than happy to answer them there. Thanks for coming out. All right, well, we're going to do the panel discussion in a few minutes, but the Slido allows you to ask questions for the panel. So if you have questions and you want us to discuss them... When is the panel discussion starting? Yeah, the panel will start in about five minutes. So if you have any questions for the speakers, feel free to add them to the Slido; it's in Discord, and then you'll be able to make sure it gets included, and here's the QR code if you want to get access to it. All right, we're going to go ahead and get started with the panel discussion. Yep, you have a mic?
Oh, I think you might have a mic coming. All right, just a reminder before we get going: if you scan this QR code you'll get access to the Slido, and that will allow you to ask questions. So if audience members want to ask questions directed at any of the members of the panel, feel free to connect there. We already have two questions that we'll make sure we deliver, but we'll have conversations; if you don't have any questions, we can make them up. I'm happy to ask. And then we'll go from there. So feel free to hook into Slido. Make sure you select the proper room. I'm not even, somebody French, please tell me how to say that. Salve Marie. There we go. Okay, so go to that and then we'll go from there. So we had two really great presentations that I think touched on really important factors of detection engineering. One is this juxtaposition of false positives and false negatives, and how you can reduce false negatives while maintaining a reasonable false positive rate. And the way they talked about doing that is to disconnect detection firings from alerts and investigations. So the idea is that just because your detection fires, or there's an event that matches your detection rule, that doesn't mean you need to fire up the entire investigation process, and that allows you to conserve resources. One of the things that is really interesting to me when I look into signal detection theory, which is the psychological, or rather the overarching, theory behind detection, is this juxtaposition of false positives and the idea that different detection problems weigh them differently. In cybersecurity we have one detection problem, but, for instance, the criminal justice system is a detection problem: the question is, did the person commit the crime? And medical testing is another detection problem: does this person have the disease?
And different detection problems are going to prioritize the reduction of either false positives or false negatives differently. In the criminal justice system we want to reduce false positives, because, for instance, convicting an innocent person of a crime is worse than letting a guilty person go free. But in medical testing you could say that false negatives are viewed as worse, because not detecting cancer early on is worse than telling somebody they're fine when they're actually not. So one of the questions, and I think they touched on this from their perspective, but I'll ask everybody just to get the juices flowing, you could say: in detection engineering, in the context of cybersecurity, do you think that false positives or false negatives are worse, and maybe give us a little explanation of why? And I'll start with Olaf. Oh great, so it's time to think about it. Yeah, now I have to do it live, right? But I think it stands to reason that false negatives are worse, because not knowing that you got breached is always worse than having noise. Noise can be dealt with up to a certain extent. They had a great example where they used indicators on top of detections, or in parallel to detections. The stuff you know you want to make sure always alerts, you build a detection for; that's also how I approach it. And the other stuff, it could be bad or not, and so I wouldn't tune the false positive rates as much, because an attacker is smart enough to mimic whatever is happening in your environment if they spend the time and effort to do so. So for those indicators, what we built as well is a sort of framework around them where we can quantify every indicator based on all of its entities and the context around them.
So we look at not only, is this a user in our environment, but also which rights do they have, do they have a path to any sensitive resources in BloodHound, all those kinds of things. We look them up externally, and based on that context we give them a rating, and then we do correlation on all of those ratings over multiple time frames. So the false positives don't really matter a lot, because in correlation they evaporate anyway. Cool, thank you. Emilio? Yeah, for sure, for sure. Thank you, Rémi. I think it depends on the context. Well, if you have limited resources, maybe false positives are the worst, because you want at least to treat the most critical things first; and if you have a way to handle the false negatives, then maybe the false negatives are the worst; but if you don't have those resources, well, then I'd say the false positives are the worst. Okay, thank you, thank you. Yeah, I think context matters, right? I think Rémi has a very good point. It depends on where you sit as well. I'm the only one here who works for a SIEM vendor, so my team deploys rules that go to all of our customers, so generating a lot of false positives impacts many, many lives; well, analysts' lives, not threatened lives. And there's also the impact: if we make bad rules that generate thousands of false positives per customer, that generates lots of problems. So false positives are big problems. False negatives, if they're known, if we have assumptions or if we know our blind spots, then they're less dangerous. So if we follow something like the ADS framework from Palantir, that can help you negate the false negatives a little bit. But in general, yes, missing an attack is more impactful than having a bit of a flood of alerts. Awesome, awesome. Okay, it actually looks like we've had quite a few questions come in, so thank you to all of you in the audience. I'm going to pick one. I was going to pick them in the order they came in, but I don't think it's in chronological order anymore.
So one easy one is: is Yamaha an acronym, or where did that name come from? Well, yes, you know, like YaST, yet another setup tool, or something like that. There's a trend there, and I was thinking about yet another something, and I have a Yamaha bass, so I had to make it an acronym with Yamaha: yet another mainly aggregated hunt activity. Okay, you're going with the yet-another thing, okay. Awesome, thank you for that. Okay, I just realized they're being upvoted, so it's in order of most upvoted. So again, for Rémi and Emilio: what is your favorite indicator? Are there any that are kind of like your child, like a baby indicator? And based on that reaction, I think there probably is at least one. Yeah, one of the things that I liked, you said there are multiple scoring methodologies, but one of them was basically: less frequent has a higher score, right? And that reminded me of Nassim Nicholas Taleb's Black Swan book. The general idea of the philosophy is that things that are going to have a major impact on you are by definition statistically rare, and that in general we live in an organized and ordered world, right? So everything kind of makes sense, and things that happen frequently almost certainly have to have very small impacts on us; otherwise we would just live in chaos all the time. So the idea is that if something happens all the time, it's by definition not that bad, right? And all the really bad things that can possibly happen, like his example of the financial crash in 2008, the real estate market crash, those are going to be statistically rare. So if you're going to take a starting position on how to score things, it's all about least frequency of occurrence, or prevalence, right?
That seems like a really good start. Yeah, it also increases the feasibility of actually analyzing it properly, right? So Olaf's talking about the ability to actually analyze things. Can you expand on that a little bit? Well, if the domain name is anomalous and it comes from a small set of processes, it's easier for an analyst to determine, hey, this is weird, I should look into it. Whereas if you see Teams connecting to Microsoft, it's 95% or 99.999-whatever percent legitimate, unless it's a sophisticated attacker who found a way to piggyback off it. Determining whether that is actually malicious is way harder than for the anomaly that stands out. Yeah, okay, now you're making me think of something different. I don't even know if machine learning is the proper field of study for this, but there's this concept called the manifold hypothesis, which is that you can analyze things from an essentially infinite number of perspectives. And with this idea that the less frequent something is, the more likely it is to have a major impact, the interesting question is: how do we know that we're looking at the proper features, or the proper variables, of that thing? So Teams connecting to Microsoft may not be the appropriate way to look at that interaction. Maybe there's some other feature, maybe one we don't even know about, or don't have telemetry on, that's actually the thing that makes it statistically rare relative to all the others. And that's something I constantly struggle with: how do I know that I'm looking at the right things instead of just what is apparent? There's this concept called the snake detection hypothesis, and the idea is that humans, or primates in general, evolved with snakes as their primary predator, right?
And so the idea is that humans dedicate a large portion of their brain to visual acuity. The question is, why do we see so well, while dogs, for instance, don't see as well, and that type of thing. And the hypothesis posits that the reason is that snakes were our predator, and snakes are relatively camouflaged, so it's very difficult to see them. So our ancestors, who survived in rougher times, were the ones able to see snakes and differentiate them from their environment. And one of the things is that we prioritize movement. There are all these different related effects; for instance, you go to a street corner in a busy city and point up at the sky, and people will stand next to you and look at where you're pointing, even if there's nothing there. That's because of this kind of lizard-brain thing where we start to think, oh, if this person is spending their most valuable resource looking up at the sky, there must be something to see. So here's the question I have: that's a biologically evolved process, our visual perception, but imagine that EDRs are our perception in the cyber realm. How do we know that what EDRs are seeing is actually what we should be seeing? Because it's not evolved in the same way; it's artificial. Does anybody have any thoughts on that? Scooby, come on, buddy. This is red diamond. No, but I think, yeah. So for EDR specifically, and all other tools, you cannot blindly trust any vendor, right? There are some vendors that you might trust more and some that you might trust less. But I think Olaf talked about it a little bit in his talk: you build a detection for something that you are aware of, and at some point the EDR might catch up, and then you can retire your detection. So that's one answer going in the direction you were talking about.
But yeah, I think it's very important to know your threat models, test your security software, and make sure they are responding the way you're expecting. And if they're not, you should hold your vendors accountable as well. They are the ones who provide the detections, and they are the ones who can improve their product. So in general, if the tool you're using is not detecting what you're expecting, you need to go back to them. If you're building your own tool, then you talk to yourself and you prioritize those things. Yeah, for sure. I live in Las Vegas, so of course, leveraging color to get people to do things that they otherwise wouldn't do is happening right outside my house. All right, so trying to stay on the SOAR topic, maybe this is the perfect question for you, but this is another audience question: what are some tips to enrich existing detections with sufficient context to make triage easier? Does anybody have a favorite data source, or maybe a little trick, to make sure that you're giving the analyst as much information as possible, or anything along those lines? Gotcha. Yeah, I fully agree, and one of the things I would add on top of that is a sort of historical overview of the entities that you're analyzing. So if you're looking at a machine, a user, and a domain, I would add for at least the machine and the user a historical overview of how many alerts have triggered on them. Because if you have a larger SOC, it's quite likely that you looked at alert one and someone else looks at alert two an hour later, and you're not aware of each other looking at them; and in context, in correlation, those two might actually not be false positives and could lead to a bigger story. Yeah, clarifying relations between entities is really important, I agree.
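The historical-overview enrichment described above might look like this in miniature, with a hypothetical in-memory history standing in for the SIEM lookup a real SOAR playbook would do.

```python
from collections import defaultdict

# Tiny per-entity alert history; in practice this would be a
# SIEM query, and the entity and alert names are made up.
history = defaultdict(list)
history["host-42"] = ["susp_powershell", "lolbin_exec"]
history["alice"] = ["impossible_travel"]

def enrich(alert):
    """Attach how often each entity appeared in earlier alerts,
    so two analysts an hour apart can see the bigger story."""
    alert["entity_history"] = {
        entity: len(history[entity]) for entity in alert["entities"]
    }
    return alert

alert = enrich({"name": "kerberoast", "entities": ["host-42", "alice"]})
print(alert["entity_history"])  # {'host-42': 2, 'alice': 1}
```

Counts are the cheapest version; linking the actual prior alerts is what turns two unrelated triages into one investigation.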
One question I have about how you're doing the grouping. There's this idea called composability, which is how things compose: one thing provides output that's then an input to another thing. So imagine you have some alert that's based on some behavior, and that behavior can't stand alone; it requires some sort of prerequisite to be achieved. Kerberoasting is the example that I like to use. First of all, you can't necessarily just Kerberoast; you probably need to enumerate service accounts first, and then you can run the Kerberoast. But then I don't just Kerberoast for no reason; I then intend to log on with the user that I requested a service ticket for. So there's this idea, kind of what you suggested in your talk, that we can potentially look for things with a broader scope: somebody requested a Kerberoast service ticket, which potentially happens all the time, but now I'm waiting for other things to happen that are going to be relevant, that give me context. But have you considered thinking about it from the perspective of other things that are relevant to that particular behavior, as opposed to other potentially bad things? Yeah, but, sorry. We have some indicators that are like super indicators: they're built as a correlation between indicators, and they trigger if there are, like, five of these in the same hour or something like that. And then it raises an alert with all the indicators that triggered, and the analysts can see, okay, this happened, and in this sequence. So yeah, we have things like that, and it's pretty nice. We don't have many; we're kind of scratching the surface on this concept, but it's a goal for us to be able to leverage the idea that if A, B, and C happen independently, we don't care, but if A then B then C happens, it's probably something that's very bad.
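A minimal sketch of that A-then-B-then-C idea, assuming time-sorted events and made-up indicator names, with the Kerberoasting chain from the question as the running example; the one-hour window is also just an assumption.

```python
from datetime import datetime, timedelta

# Indicators that are harmless alone but suspicious in sequence.
# The names and the window size are illustrative.
CHAIN = ["enumerate_spns", "kerberoast_ticket_request",
         "service_account_logon"]

def chain_fires(events, chain=CHAIN, window=timedelta(hours=1)):
    """True if the chain occurs in order within the time window.
    `events` is an iterable of (timestamp, indicator) tuples,
    assumed sorted by timestamp."""
    step, start = 0, None
    for ts, name in events:
        if start and ts - start > window:
            step, start = 0, None  # window expired, start over
        if name == chain[step]:
            start = start or ts
            step += 1
            if step == len(chain):
                return True
    return False

t0 = datetime(2024, 1, 1, 12, 0)
events = [
    (t0, "enumerate_spns"),
    (t0 + timedelta(minutes=5), "kerberoast_ticket_request"),
    (t0 + timedelta(minutes=20), "service_account_logon"),
]
print(chain_fires(events))  # the full chain inside one hour
```

Independently, none of these three events would raise an alert; the ordered sequence inside one window is what carries the signal.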
It's a goal, but we're not quite there yet. Yamaha is kind of new, so we're still doing our R&D on that. It's coming. Yeah, that's awesome. There are two things there. So there's the chain rule: things that you know should not happen in a certain order, or just together. But there's also, you mentioned it in your talk as well, the entity scoring. So today you're doing this, maybe you have a score of two because you've, I don't know, requested a Kerberos ticket. But tomorrow you start logging in to this SQL database that you've never connected to, you start maybe targeting things that you've never touched before, you go to some SharePoints. Every time you do one of those things, the score on your entity, the likelihood that what you're doing is bad, increases. I think you mentioned that in your talk, and that's also... Yeah, we called it inertia analysis. I'm not sure if that's the scientific term; there must be a better term for it. But yeah, basically, if the entity has a kind of static score, and in the next week it increases tenfold and stays like that, well, perhaps we should investigate, because something new is happening on that machine and it's triggering indicators, which are indicators of abnormal activity. Yeah, for what it's worth, I liked the term inertia analysis. So yeah, that was good. Cool. I'm going to switch to another audience question, this one for Olaf. A little bit of a switch of topics, but: how do you convince management to invest resources in detection as code? Yeah. And he is management, so he doesn't have to; they pay him for that. That's a good question. I never had to convince management of it, because in most cases management wants visibility into what the team is doing. They want to have some measure of quality control.
They wanna have insight into the process, and these kinds of things are all there: the quality aspect is there, the validation is there. And we work for larger enterprises and global companies, so maybe it's a bubble, right? If you're in a small company it's a harder sell, but even then, the guarantee of consistency is usually already the biggest selling point. And if you're in a company that is susceptible to audits and those kinds of things, then it's even easier, because the full audit trail is there; every auditor likes it. Especially if you explain how it works, because they usually don't get it directly. I think those are the easiest selling points: the measurability, the quality, the insights, and having a consistent method there. Hi, we're implementing detection as code in our organization as we speak, and management was very, very open to the idea, but if they weren't, I would have given the argument that we can have dashboards and automated KPIs for detections. Dashboards? Yeah, they have pie charts; everybody's happy to have it. This man speaks management right here. We have about five more minutes, so for a couple of the questions I'm gonna do a shameless plug for my workshop. Let's see: how do you evaluate that you have good or bad detection coverage? I have a three-hour explanation on that coming at 1 p.m. if you're interested, at least my perspective. What does the validation process look like for writing a new detection? That would also be included in the workshop, if you're interested. Let's see, maybe a final question that I'm sure everybody has an opinion on, and this is definitely an unsolved problem in my opinion: how do you... 42, huh? I have an opinion, it's 42. 42? Okay, well, we'll see how that ages in a second. So it's directed at Olaf, but I think everybody has an opinion. 
How do you prioritize your detection logic from the backlog to the planned state? So you have this backlog of detections; how do you prioritize? Just like we have limited capacity for dealing with alerts, we have limited capacity for detection engineering, and there are way more things we could be building detections for than we have time. So how do we choose? Yeah, I'll try to give the answer as a person working for one organization, because as a consultant it's kind of different, I guess. Assuming you have a threat model, or are aware of what is trying to attack you, I would look at feasibility: how likely is it that I get attacked by this? Is it not mitigated by anything? And can I detect this in a meaningful way that our analysts can analyze? I think those three things, based on knowing your threat landscape, because running after the newest zero-day is maybe not the most important thing in every case. This is another Nassim Nicholas Taleb idea called the Lindy effect, which is the idea that for things that have existed for a long period of time, you should assume they will continue to exist for at least the same amount of time. If I wrote a book today, the chance that 2,000 years from now it would still be published is basically zero, but the Bible, for instance, was written 2,000 years ago, and it's reasonable to expect it will continue to be published 2,000 years from now. So the longer something has been around, the longer it will continue to be around, and that's the antidote to chasing the new hotness. Something new comes out, I don't even remember, the Snake, Turla, malware thing, and people are focused on that because it's this new thing, but if you actually look at it, a lot of it is really old stuff that's just being repurposed or wrapped up into something that appears to be sexy, right? 
And so if you had focused on the old things, the things that have been around and have been successful for attackers for a long time, you would have already been prepared for that type of situation. What I'm hearing from your theory is that if my backlog is a hundred tickets, in the future it's gonna be a hundred tickets. Perfect, it will never leave. Yeah, I think it's difficult to prioritize well, but knowing your own environment and your own threats, you can make the best determination based on that knowledge, I think. There's no single right answer there. Yeah, I would say make sure you have the logs. Which logs do I need to actually detect this? If I don't have the logs, that's probably not my priority; my priority is probably to go get those logs, though if you're a large enough organization, that might be another team than detection engineering. Then: what's the severity of this attack? What's the impact? What's the likelihood of being hit? A little bit like vulnerability scoring, but you can apply that calculation to your detections as well. And what's the fidelity you're expecting out of it? Do you expect it to generate very few false positives, or to be very noisy? Is this a one-for-one rule? Is it more like a chain rule? Is it something you built more as an indicator, which is more for threat hunters? Those are all things you can consider to help you organize your backlog and put the most value on top, obviously. Something that gives you a one-for-one, you put at the top; something that's more threat hunting, you put lower in the priority. A whole class could be given on what a high-value detection is. Very hard to answer. Yeah, for sure. The one thing I was interested in, nobody mentioned it, but prevalence seems to be a thing a lot of people use. 
But I'm cautious about prevalence numbers. Everybody's seen the plane with the red circles on it, and people talk about observation bias, right? The idea that if you see something, you're going to look for it, and when you look for it, you're going to continue to see it. If you've ever heard of something and then gone out into the world and seen that thing over and over, but you feel like you've never seen it before, that's essentially observation bias. These prevalence numbers, well, they are useful, because absolute occurrence is interesting. But you gotta be very careful when you start comparing things, because just because something has been seen more often doesn't mean it actually occurs more often in reality or has a bigger impact. So prevalence is an input, but it shouldn't be, in my opinion, the primary factor when you're prioritizing. Also something that has not been said: maybe, how much of a pain is it to the threat actor if we block this technique? If you can disrupt the kill chain because you block this particular spot, that could be something nice to put priority on. So centrality: how central is that technique or that behavior to the largest number of possible attack paths? That would be really interesting. And also, how many different options do they have? If you're talking about LOLBins, for instance, there's a trillion LOLBins, so if you block one, I'm just gonna use a different one. But if you're talking about lateral movement, there's a relatively finite number of lateral movement options, so if I can block WMI lateral movement, now you're stuck with services or scheduled tasks; there are very few, so you start to close people in. I think we're near the end, but it looked like you were getting ready to say something. No, well, they said pretty much what I wanted to say. Okay, cool. 
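The prioritization factors the panel lists (severity, likelihood, fidelity, log availability, and the Lindy-style preference for long-lived techniques) could be combined into a toy backlog score like the sketch below. The weights, the 1-to-5 scales, and the field names are invented for illustration; they are not anyone's actual scoring model.

```python
def priority(detection):
    """Toy backlog score combining the factors from the discussion."""
    if not detection["logs_available"]:
        return 0.0  # no logs: the real task is log onboarding, not detection
    score = detection["severity"] * detection["likelihood"] * detection["fidelity"]
    if detection["longstanding_technique"]:
        score *= 1.5  # Lindy-style bump for old, proven attacker tradecraft
    return score

# Hypothetical backlog entries, scored on invented 1-5 scales.
backlog = [
    {"name": "wmi-lateral-movement", "severity": 4, "likelihood": 4,
     "fidelity": 4, "logs_available": True, "longstanding_technique": True},
    {"name": "new-hotness-zero-day", "severity": 5, "likelihood": 2,
     "fidelity": 3, "logs_available": True, "longstanding_technique": False},
]
backlog.sort(key=priority, reverse=True)  # highest value on top
```

With these invented weights, the old, high-fidelity lateral-movement detection outranks the shiny new zero-day, which matches the panel's argument.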
All right, well, thank you everybody for joining us. Thank you to Olaf, Emilio, Remy and Scooby. Great presentations, all three of you, and yeah, looking forward to rewatching those and playing around especially with your tool, because that seems like fun, and it really seems like a capability a lot of organizations would benefit from. Thanks everybody. The conference will start again at 1 p.m., so go have lunch and come back. Workshops also start at 1 p.m. All right, welcome to the second block of the Ville-Marie room. We're now at the red team block. Our moderator is Martin Dubé. He has been hacking for 15 years, in particular on malware development, evasion of defense controls (I'm losing my sheets here) and process automation. He has been a challenge designer for Hackfest for seven years and NorthSec for one year. Currently, Martin leads a large ethical hacking department. So welcome, Martin Dubé. Right, good afternoon, everyone. I hope you had a good lunch, maybe not too heavy, because honestly, this afternoon's gonna be heavy in content. I'm very excited to host this red team block. I think it's gonna be an awesome three conferences in a row. Without further delay, let's start with our first speaker. We have Guillaume Caillé. He leads the pentesting team at OKIOK. Guillaume specializes in malware development, red teaming and incident response. The floor is yours. Thanks, Martin. Hi everyone, thanks for joining me for this talk on thwarting malware analysis and triage using established and novel techniques. So as Martin said, I'm Guillaume Caillé, team lead for pentesting at OKIOK. I'm a huge NorthSec fan; I've been participating for the last few years and it's always a pleasure to be here. As you will see in the next slides, I really like Nim as a programming language. 
So every proof of concept that I publish today on GitHub is written in Nim, but rest assured, you can translate them into less optimal languages like Rust, Go, or C. If that was a joke, please don't throw rocks at me. Here's what we'll cover today. But first, where this research started. Obviously, when you do red teaming, you need to craft your own payload, because if you use anything public, you will get burned instantly; it's not even worth doing. So when you spend that amount of time creating that precious tradecraft, the last thing you want to happen as a red team is the blue team getting a hand on your payload and creating a bunch of signatures around it in one hour or even one day. Even if my goal is to train the blue team in the end, I wanted to put up a better fight, make it more interesting. And in 2021, a researcher published a tool that I will present soon. Basically, it was an anti-copy technique: a file executed on system A could work indefinitely on system A, but on system B, or on VirusTotal, after that, it was not possible to execute it. That really sparked my interest, because he didn't explain anything about the technique. So I spent a good amount of time trying to understand what it was doing; I found some weaknesses, then I created more proofs of concept that improved on that technique. That's what I will present today as for the anti-copy techniques. But first I will present some established techniques that you can use to prevent your payload from being executed in sandboxes or from being reversed. Then we will talk about detection opportunities and other elements to consider. Established techniques: the first big category is called guardrails. The goal of guardrails is to prevent your payload from being executed in the wrong environment. 
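The guardrail concept, refusing to run outside the intended environment, boils down to a predicate over host facts. In this hedged sketch the facts are passed in as a dict so the logic is testable; a real payload would gather them live via Windows APIs, and the specific checks and thresholds here are assumptions.

```python
def passes_guardrails(env):
    """env: dict describing the host environment.
    Returns True only if the host looks like a real target, not a sandbox.
    All field names and thresholds are illustrative assumptions."""
    return (
        env.get("domain_joined", False)          # sandboxes rarely join a domain
        and env.get("cpu_count", 0) >= 2         # sandboxes often get one vCPU
        and env.get("screen_count", 0) >= 1
        and env.get("mouse_moved_recently", False)  # evidence of a real user
    )
```

A payload would simply exit (or decrypt nothing) when this returns False; the weakness, as noted below, is that a reverser can read the checks and fake each fact in their VM.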
That can be either a sandbox or just not your right target. There are multiple ways you can go about that: making sure your payload checks that the system is joined to a domain, has a good enough number of CPUs, a high enough screen resolution, whether the mouse has moved in the last few minutes, whether the number of windows changed in the last few minutes. All those techniques are really good against sandboxes, a bit less against humans, because a reverser can just take a look at your payload, see what you're looking for, and modify his virtual machine. In the end, he will be able to decrypt your payload, get your shellcode, get URLs, secrets, et cetera. So it has good points, but some real weaknesses. The next big category is anti-debugging or anti-reversing techniques. The goal is to prevent a reverser from taking your payload, opening it in a debugger like I'm showing here, and bypassing your basic guardrails. In the GIF at the right, you can see that I implemented two checks. First, when you execute the payload without any debugger running, it does not detect a debugger. But IDA, by default, creates a few temporary files next to your payload, so I'm checking regularly for those types of files. And also, in the PEB of the process, there's the BeingDebugged value that you can check. Those are two known anti-debugging techniques. The more you add to your payload, the more painful it is for the reverser. The goal is to add as many as possible so the reverser loses interest and moves on to other things. However, a dedicated reverser will in the end be able to decrypt your payload, because with the right tooling and the right knowledge, the debugger can hide itself from user mode; your payload won't see that there's a debugger. So here's the first anti-copy technique, which is Skrull Like A King. I didn't develop this. 
It was created by aaaddress1 and was presented at a couple of conferences in 2021. Before I move on, there's a QR code at the bottom right. I tested it this morning and I'm not sure it's possible to scan it from the crowd, but just know that at the end, you can go on my GitHub and find the repos. So this proof of concept was published in 2021, and what you can see in the GIF is the author executing the file one time, then moving the file to a second system, and on the second system, it doesn't work anymore. As he puts it, it's natively broken. But how does it work? Why is it natively broken once it's on the other system? Let me explain. When you have an EXE or a DLL, both files follow the PE format, and in the PE format there's something called the import table. Let's say you have a basic program that, when you execute it, pops a message box. For your program to be able to spawn a message box, it needs to call a function in a system DLL on Windows; for a message box, it's user32.dll. So the import table of your payload needs to have an entry for MessageBox inside that DLL to be able to fetch the address at runtime. In the diagram at the right, the two most important things to know today are the OriginalFirstThunk and the FirstThunk, which are a small part of the import table. The OriginalFirstThunk is a read-only table which contains the names of each function that you need at runtime. And at runtime, the FirstThunk table, which is empty on disk, gets filled with the real address on the current system for each function, so that every time a function is called again, it's the address stored in the FirstThunk that is used. As a simple example: in the OriginalFirstThunk, you could have a few ETW-related function names; at runtime, the FirstThunk will get filled in with the memory addresses for those functions. 
However, the Skrull Like A King proof of concept does not use function names. Instead, it uses integers. But what are those integers? They're ordinals. If you open any system DLL on Windows, you will see that for each function, the function name is exported, but also a small integer, which is the ordinal. The way it works is that the first function the DLL exports will have the number one, the second number two, and so on and so forth. Normally, everybody calls functions by their names, because that's more stable: the names don't change. It's not the same thing for ordinals, because at every Windows version change, even small ones, the ordinal numbers can be swapped. So you could have a binary that calls a function by ordinal, you do a Windows update, and then it doesn't work anymore. Not really fun for a normal program; maybe more interesting for a payload. You might see where I'm going with ordinals. So the Skrull Like A King proof of concept: basically, you have two binaries, the builder and the target file. The builder loads the target file's bytes in memory, maps every section of the binary as an image in memory, like when you're doing PE reflection; it then parses the import table, finds every function name, looks up the function's ordinal on that system, swaps the two pieces of information so that the import table only has ordinals, and then writes everything back to disk where the file was. So every time the file gets executed again, there are no function names in the import table, only function ordinals. It will work on that system, but maybe not on another system which doesn't have the same Windows version. So in the end, it's a really efficient technique, but you need to trust your victim to have a different Windows version number than VirusTotal, or it will suck. It's great, but you have to trust your target in some way, and I don't want to trust any target, so it has some drawbacks. 
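The ordinal fragility that makes Skrull-built files "natively broken" can be modeled in a few lines. The export tables below are invented, not real user32.dll ordinals; the point is only that the same ordinal resolves to different functions on different versions.

```python
# Toy model: the same function name can sit at a different ordinal
# after a Windows version change. These tables are invented examples.
exports_v1 = {"MessageBoxA": 1, "MessageBoxW": 2, "GetDC": 3}
exports_v2 = {"GetDC": 1, "MessageBoxA": 2, "MessageBoxW": 3}

def build_import_by_ordinal(names, exports):
    """Skrull-style rewrite: replace import names with this system's ordinals."""
    return [exports[n] for n in names]

def resolve_by_ordinal(ordinals, exports):
    """What the loader does at runtime on whatever system runs the file."""
    by_ordinal = {v: k for k, v in exports.items()}
    return [by_ordinal[o] for o in ordinals]

# Built on "system A" (v1): MessageBoxA becomes ordinal 1.
imports = build_import_by_ordinal(["MessageBoxA"], exports_v1)
```

Resolving `imports` against `exports_v1` gives back `MessageBoxA`; resolving against `exports_v2` yields `GetDC` instead, so the file calls the wrong function and breaks, which is exactly the anti-copy property.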
It's not effective against the same Windows version. I tried to contact the author, he never answered, but I tested it between two systems that have an identical Windows version. As you can see, even when you apply the Skrull technique on the first system, calc still works. On the second system, the same file will run again and work perfectly. So it's a really good technique, but it has weaknesses. Here's what I did. First, since I like Nim, I translated the technique into Nim. Then I improved it: instead of needing two binaries, one targeting the other, the file overwrites and changes itself. So you can send one file to your victim; once the file is executed, it loads itself in memory, changes itself to ordinals, and writes itself back to disk. That's nicer; in some red teaming scenarios, you don't want to send too many binaries to your target. However, it has the same weakness: it's not effective between the same Windows versions. Before I move on to the next technique, there are two concepts I need to cover with you. The first is alternate data streams. For those who don't know what ADS, or alternate data streams, are, look at the image at the right. In that folder, I have only one file, northsec.txt. If I output its content with the type command, you see the default content. However, with ADS, you can create alternate content in what we call an alternate data stream. So if I request the "secret" alternate data stream, you see different content. That's an NTFS feature that was created for macOS compatibility. It's mainly used nowadays by web browsers: when you download a file using your browser, whether it's signed or unsigned, you get a Zone.Identifier alternate data stream applied to it. 
So if the file was unsigned, when you execute the EXE, for example, you get that big blue screen we call SmartScreen telling you, hey, that file is not signed. That's the main use of ADS nowadays. Also, every file has a default, unnamed data stream: when you create a basic file and open it in Notepad, for example, that's the default unnamed data stream you see. The key thing to understand here is that, depending on how it's done, when you move a file that has an alternate data stream to another system, or to VirusTotal, for example, the ADS does not come with it. That's very interesting; keep it in mind. The next concept is self-deletion. Normally on Windows, it's not possible for a process, or even for you, to delete a file which is linked to a running process. If the file is mapped into memory and the process is running, you cannot delete that file. However, Jonas Lykkegaard found a way to do it anyway. The way it works is that your payload gets a handle on itself, renames its default unnamed data stream to something named, closes the handle, reopens it, and when it closes the handle again, since there is no primary data stream anymore, Windows ends up deleting the file itself. So you end up with a process in memory that is not linked to any file on disk. So here's the first technique I created, which I call payload rekeying. The goal is to have a payload that contains encrypted stuff like shellcode, URLs, and other secrets, and have it execute on a system A while not having any prior information on system A. Then I want that file to always execute successfully on system A, but never again on system B, C, VirusTotal, or other virtual machines; or at least I want it to be hard to execute or to decrypt the encrypted stuff. So here's the plan. At the right, you can see a hex dump of a binary. The parts in red are encrypted secrets, in this case; think of variables, URLs, or other stuff. 
The payload loads its own bytes in memory, locates those secrets, decrypts them with the hard-coded key, then generates a new key, stores part of that key, or the whole thing, into an ADS, re-encrypts those decrypted secrets in memory, then overwrites itself back on disk. A few things. To be able to locate my encrypted secrets, I append and prepend each secret with a unique pattern that is not commonly found in the byte patterns of a file. I'll show you the quick GIF while I talk about the rest. In the GIF, you can see that I execute my rekeying payload on system A. I try to re-execute it, it still works. Then I move it to system B, which has a mounted drive for the same file, copy it to the desktop, and re-execute it. It does not work anymore, and I also added a function that self-deletes when it detects that it is on another system. For all of that to work, the key initially needs to be a hash. That's my implementation, because when you replace bytes in your payload, you cannot change the length. So my key always needs to be the same length, and it's the same thing for the encrypted secrets: you need to use an encryption algorithm which does not add padding or change the length. The ciphertext needs to be the same length, so you can use encryption like AES-256-CTR, if I remember correctly; that encryption does not change the length at all. I store the original key, which is a hash, and I also store the hash of that original key. Then, when I do the rekeying operation, I generate a new key and replace the bytes in memory for that key. At every run, even the first, I validate that the hash of the current key matches the stored hash of the original key. If it's different, it means I'm on the second run, it means I have already rekeyed. So yeah, like I said, on subsequent runs, that is checked. 
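The two constraints just described, a length-preserving cipher plus a hash-of-hash first-run check, can be sketched as follows. This uses a SHA-256 counter-mode keystream as a standard-library stand-in for the AES-256-CTR the talk mentions, and the key-derivation input is invented.

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Length-preserving 'encryption' via a SHA-256 counter-mode keystream,
    a stdlib stand-in for AES-256-CTR. The same call encrypts and decrypts,
    and the output is always exactly as long as the input, so ciphertext
    can be patched in place over the plaintext bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# First-run detection via hash-of-hash, as described in the talk.
original_key = hashlib.sha256(b"build-time-entropy").digest()   # invented seed
stored_hash_of_key = hashlib.sha256(original_key).digest()

def already_rekeyed(current_key: bytes) -> bool:
    """True once the in-file key no longer matches the build-time key."""
    return hashlib.sha256(current_key).digest() != stored_hash_of_key
```

Because the key is itself a fixed-length hash and the cipher never changes the length, the rekeyed bytes can overwrite the originals on disk byte for byte.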
And my payload will know that it needs to use the new key, and the part, or the whole thing, that is stored in the ADS, to be able to decrypt URLs, shellcode, and other secrets. If that decryption fails, I trigger the self-deletion, as you can see in the GIF. The goal is to give the blue team as few opportunities as possible to decrypt the payload. So in that case, the payload becomes bricked if the file is moved to another system, if the file is uploaded to an online sandbox, or if the SOC analyst downloads your file using the EDR functionality; in all those cases, the ADS does not come with it. If you have an NTFS USB key and you move the file onto it, since ADS is an NTFS feature, the ADS will come with it; but if your USB key is FAT32, for example, the secret will be gone. If you delete the file, the secret is gone. If you move it, in most ways, the secret will be gone. This technique was nice, but it again has some weaknesses, because if the SOC, or the analysis process, took an image of the system before doing the reversing operation, in the end they will know that I'm looking for a key in the ADS. If they still have the laptop or the system, they can get the secret back. So I wanted something even better. Here's the Nim DRM proof of concept. The goal was to build on the strengths of the last proof of concept, but provide even fewer opportunities for the blue team to, in the end, be able to decrypt the payload. The approach is that I've completely removed the decryption key from the payload; I'm using an external licensing server. So let me explain the GIF. At the left, you have a license server that runs on system A. At the right, you have system B. The file is initially executed on system B and can contact the license server on system A. 
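The license server's side of this setup (bind the license to a host on first contact, verify the fingerprint and the stored secret afterwards, ban on any mismatch) might decide like the sketch below. Everything here, the HMAC key derivation, the data structures, and the field names, is an invented illustration of the protocol being described, not the actual proof-of-concept code.

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = b"server-only-entropy"  # invented; never leaves the server
SEEN = {}      # license_id -> (fingerprint, ADS secret issued on first run)
BANNED = set()

def license_check(license_id, fingerprint, ads_secret=None):
    """Return the payload's marching orders for this run."""
    if license_id in BANNED:
        return {"action": "self_delete"}
    # The decryption key lives only server-side, derived per license.
    key = hmac.new(SERVER_SECRET, license_id.encode(), hashlib.sha256).hexdigest()
    if license_id not in SEEN:
        token = secrets.token_hex(16)            # to be stashed in the ADS
        SEEN[license_id] = (fingerprint, token)  # bind license to this host
        return {"action": "run", "key": key, "ads_secret": token}
    if SEEN[license_id] != (fingerprint, ads_secret):
        BANNED.add(license_id)  # file moved, binary patched, or being debugged
        return {"action": "self_delete"}
    return {"action": "run", "key": key}
```

Closing a campaign is then just `BANNED.add(license_id)` or shutting the server down: the key is never returned again.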
The payload contains a unique license that it sends to the license server the first time, along with many fingerprinting elements like file hashes, hostname, and whatever else; you can get creative with this. It can look like an anti-cheat or licensing server for games, though it's a really cheap version of that. You can calculate a memory checksum of some important elements of your payload to make sure it's not bypassed. You send all that information to the license server, and the goal of the license server is to take a decision based on what was provided. If it decides to return the decryption key, it will also send a generated secret. That secret gets stored in an ADS; I think you can see the pattern here. And that secret needs to be provided on each subsequent run, with all the other unique elements of that fingerprinting system. Again, if anything is missing, the license server can return a message to self-delete or take other actions. In this case, the payload gets bricked for the same reason as before, but if you added all those basic guardrails and anti-debugging techniques I talked about at the beginning: if someone patches the binary to bypass some basic guardrails, the file hash will be different, so the license will get banned. If the process is being debugged for any reason, the license will get banned. And more importantly, as an operator, if you feel like it's time to close your campaign, if you feel that the SOC has begun an investigation, you can just ban the license or close your license server, and the decryption key will never get returned again. So, detection opportunities. The one technique common to all of the proofs of concept I presented today is ADS. You can use Sysmon to detect or log that. The thing you need to know is that the basic Sysmon configurations online do not look for EXEs creating ADS, except if they are in the Downloads, Startup, or Temp folders. 
There are other Sysmon configurations, like this one from Florian, that check for a very specific alternate data stream name; in this case, it's a Cobalt Strike beacon object file that is public. But as you can see, you cannot always trust hackers to be consistent: there are multiple other GitHub repos that use other alternate data stream names. So in the end, this is not really a good detection mechanism. I initially thought I had a really good detection idea: my payload creates an ADS on itself, and since the main usage of ADS nowadays is browsers creating the Zone.Identifier ADS, a file creating an ADS on itself would be a really good detection. However, as you can see in this Sysmon image, it's not my payload's EXE that creates its ADS, it's Explorer.exe. I have no idea why this is the case and I need to investigate more, but in all the tests I did, it was always Explorer.exe. So this is not really nice as a detection opportunity. However, you could very well baseline your environment for any ADS creation; aside from Zone.Identifier, I don't think there's much other ADS creation going on, so baselining your environment is a really good idea. A technique could try to mimic the Zone.Identifier ADS, but with custom content inside it, that would not add much cover. Another key thing to note here: you can see the content of my ADS was logged. I can guarantee you that I didn't create this Chinese- or Asian-looking string, so even Sysmon does not correctly capture the content of ADS being created. As a conclusion, anti-copy is really fun. I encourage every red teamer or pentester to play with it. I also encourage every blue team to start playing with it, because we never know when threat actors will start using those techniques. Another aspect to consider would be handling of the key. 
When you contact the license server, for example, and get the decryption key, if the file continues to execute for a long time, keeping the key in memory is dangerous, because any blue team can dump your process and get your key. In the same style of idea, in general, preventing in-memory content carving using custom sleep masks, if you're using Brute Ratel, for example, is a nice way to make content carving at runtime as difficult as possible. Today I store every secret in an ADS because I find it nice: it's volatile and it's hard to notice. But you could store that secret in other locations, like the registry, event logs, or the file system itself. However, in my opinion, that offers more opportunities for the blue team to find it in a short period of time. Well, thank you. Again, if you want to look at the proofs of concept I released today, they are all on my GitHub, at Offensive Teacher on GitHub. Yeah, thank you. Thank you, Guillaume. Good job on your talk, it was very, very interesting. Guys, don't forget to send your questions on the Slido; I will address them at the Q&A after, and we'll be back in 15 minutes, at 1:45. Thank you. We'll continue this afternoon with the red team block. We have with us a legend. I'm sure everyone knows Charles Hamilton, so I won't take too much time to introduce him, but just for fun: Charles is the founder of the RingZer0 Team website. He's known as Mr.Un1k0d3r. He has a Patreon. He has given many talks, always very epic stuff. So Charles, have fun. Thank you, thank you. Can you guys hear me? So I have a tendency to have way too much content, so this is gonna be like 30 minutes for probably three hours of content, so buckle up. We're gonna start real fast. Nothing really interesting about me: manager at KPMG, we have some KPMG folks here, also known as Mr.Un1k0d3r, founder of RingZer0 Team and a bunch of other stuff. 
Some GitHub code that I wrote over the years, and also a Patreon, for like three years now, where I produce a lot of content regarding red teaming, and a lot of love to share with everyone. So first of all, I guess we're just gonna address this: the difference between a red team and a typical intrusion test. The key point here is that typical pentesting will not assess the blue team's effectiveness, right? We're not trying to see if they're good at detecting you or anything like that, and we're usually not gonna do any social engineering and stuff like that. A red team is slightly different: we wanna actually assess the detection and response capability of a company, and because of that, we usually have to deal with security products such as EDR, antivirus, and stuff like that. So that's the major difference, from a client perspective and also the tester's perspective: we have to deal with those products. Today we're gonna focus on one phase of a red team engagement; there's much more to it, but we lack time. We're gonna focus on gaining access. You have to have your code running at some point, and as a red teamer you need to find a way to evade those products. For those of you that are familiar with typical C2 frameworks, you've probably seen something like this in the past. This is typical shellcode; this is something you're gonna see with Meterpreter, Cobalt Strike, or any of the other frameworks that are quite popular. No need to tell you that EDRs and AVs are really good at detecting those patterns; those bytes have been known forever. I just wanna point out the MZARUH; keep that in mind, we're gonna come back to those bytes a bit later on. So this is well known, this is highly predictable, and of course, from a detection perspective, this can be detected easily. There's a lot of stuff on the internet about evasion and bypasses. Nothing fancy here. 
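Signature detection of known shellcode stubs really is as simple as byte-pattern matching. A toy sketch: the `MZARUH` marker comes from the slide, while the scanning logic itself is invented for illustration.

```python
# Hypothetical signature list; "MZARUH" is the well-known byte sequence
# from the slide, seen at the start of common stageless payloads.
KNOWN_STUBS = [b"MZARUH"]

def matches_signature(sample: bytes) -> bool:
    """Flag a sample if any known stub appears anywhere in its bytes."""
    return any(stub in sample for stub in KNOWN_STUBS)
```

This is why "well known, highly predictable" bytes are trivial to catch, and why the rest of the talk is about making the bytes on disk look like something else entirely.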
Personally, I love to not follow the rules, right? If we look at the typical tricks that have been used over the years, you're gonna come across the XOR loop, which is super typical, the dummy-encryption kind of thing, RC4 because it's a super simple algorithm to implement, AES, or GZIP plus Base64: layers of crap on top of your shellcode to actually achieve evasion. There's a big issue with that. Most of these will end up creating what we consider encrypted shellcode, which brings me to the next slide, where entropy is a thing. And I know that someone is gonna present, or has presented, a talk about entropy. Long story short, high entropy, which means that the sample is highly random, is bad. They know that it means there's a lot of encryption, and legitimate software tends to have really low entropy in the sense that there are a lot of patterns that repeat themselves, right? So if you encrypt half of your payload, there's a good chance that your entropy will be terrible. You definitely wanna avoid that; you wanna make sure that your code looks as repetitive as possible. One of the key things that I love to do instead of encryption, because as I mentioned, I like to not follow the trend, is fairly simple. I'll create a dictionary of words, whatever, you pick 256 words at random, and then you translate each of your bytes into the word at the matching offset in your list. What you're gonna end up with is something similar to this. Let's say your shellcode is zero, two, one, zero, zero, one, and you have your table that contains all the offsets. In this case, we only have three entries because we only have three different bytes: table first, second, and third. And the mapping of the shellcode will be converted to first, third, second, first, first, second, which are actual words, right?
So you may have realized by now that we converted the shellcode into actual words, which is quite useful, especially when it comes to entropy: you're gonna have a pretty decent entropy because these patterns are gonna repeat over and over and over on your sample. Especially if you're looking at a stageless payload, you're gonna end up with like a three-megabyte sample that's just a bunch of words one after another. Now, how do you actually deal with this? Lately I've been using a lot of C# for my payloads, so you can literally just use a built-in C# capability, which is Array.IndexOf. The point is you provide your needle, which is whatever word you're trying to map back to whatever's in your table, and you get the offset, which will eventually be converted into the byte associated with the original shellcode payload. So if you look at the final code, if you implement this in C#, this is pretty much what it's gonna look like. Overall, nothing really fancy. You have your table, your mapping, and the loop here that just converts the index back into this byte array, which contains the final data, and then the typical memory allocation, VirtualProtect, Marshal.Copy, and executing the function pointer that points to the shellcode. So this is pretty standard stuff. The only major difference here, I'd say, is really the use of words instead of something else, but you can be creative. The whole point here is: avoid encryption. This pattern is not gonna be as detected as any encryption pattern, because it's not as popular as the other types of code that we see. We need to understand something too, it's really important: when we talk about EDRs and antivirus solutions, when they try to detect that kind of code when you drop your payload on disk, they're gonna check for signatures, right?
It's really important to understand that the compiler will do a lot of magic for you under the hood, and sometimes the compiler will lie to you. We're gonna have a brief look at what the compiler may actually do for you. Keep in mind that the same concept also applies to C#. C# has an intermediate language called MSIL, which is the opcode version of whatever you write in C# that is interpreted by the .NET runtime. Just a quick example with GCC on standard C code: you can just compile it, and if you use an optimization switch, in this case -O1, you end up with totally different code. So if we look at this example here, we have this piece of C code that doesn't really do much. It's just a XOR loop, typical stuff when it comes to decrypting shellcode. This is obviously weak encryption, but the whole point is just to hide what you wanna have executed in memory later on. This is the assembly for it. Keep in mind that at the end of the day, what's inside your binary are those bytes here, so they can build signatures on that. In this case, you have the for loop with the XOR 0x0F, which is living here, so they could build a signature on those bytes. The point is you obviously wanna modify those bytes. That's kind of the whole point. So if we look at the same code with optimization level one like I showed earlier, you'll see that we have two different pieces of code. Notice that the XOR is now here: you have this byte sequence, which is 80 30 0F, and the original one was 83 F1 0F. Depending on how the compiler did its magic, it totally changed the code, and the signature is totally different. Just keep in mind that using optimization may change your code.
When it comes to writing malicious code, it may also affect the way the code behaves, because you may actually be trying to trick it. But it's really important to understand something: the compiler is definitely smarter than you. Sometimes you're gonna write super complex code, and when you compile it, it's gonna end up being the exact same code that you had at the beginning. This is an example of why the compiler is smarter than you: it's gonna compute things for you. If you declare a variable like i equals one and you assign one XOR i into the variable a, it's just gonna figure out what the result of one XOR i is, and remove all that code, so it doesn't change the signature. This is something to keep in mind. The trick, when you wanna make sure that the signature will be different, is to make sure that you don't have any static values in there, so the compiler cannot predict what the value is gonna look like. So here's an example where we have some code. It's the same thing as what we saw earlier, right? This is the more complex version of that code, with exactly the same result at the end; it just uses much more convoluted code. As you can see here, we have the dword key equals dword size XOR dword size. This is a typical trick: it zeroes out the value, and you may believe, oh, this is cool, I'm gonna change the signature of my code with this XORing technique. At the end of the day, the compiler doesn't know what's inside of dword size, but it knows that this operation will just produce the value zero. Then you have all the magic: we do plus plus, which increments the value by one, bit shift it by four, and then reduce it by one. At the end of the day, this generates 0x0F. So you may think, oh, that's pretty cool, my code is super smart, I hid the fact that it XORs with 0x0F.
Well, the compiler figured it out for you. As you can see here, you ended up with the same signature, the XOR byte which is part of the loop with 0x0F, because the compiler was able to understand what's going on in there. If you look at the code on the right side, it's pretty similar. The only difference is the key: in this case, dword size is actually unknown, because we don't cancel it out with a XOR or something like that. So technically, the compiler doesn't know at this point what it's gonna look like. However, it's still smarter than us. It figured out that this whole operation ends up being a NOT, which flips all the bits, so it was replaced with a single not instruction on this specific register. At the end of the day, it's still a victory for you, because you ended up with a different signature. So it's just a matter, in this case, of using a different piece of code: you change a line or two and the whole signature is gonna be totally different. Main takeaway when it comes to obfuscating your C or C# code: look at the actual final result, you may be surprised. If we go back to the original code, which is the one on the right, and the fully optimized one, you're gonna realize at this point that the signature is totally different. At the end of the day, those two pieces of code will yield the same result, but signature-wise they're totally different, and it's really important to actually assess the code that your compiler generates. So in our example here, if we remove the optimization on the last one that we saw, because keep in mind that the one on the right was also optimized when compiled, you'll notice that it's much bigger. We had the original 83 F1 0F, and now this whole operation has been modified into something like a sub EAX — I can't really see it from here, to be honest — and the XOR is now 31 C1.
So at the end of the day, totally different signature, just by modifying a bit of your code. The downside is that it's pretty annoying, let's be honest. If you have to rewrite all your tools to make sure that you obfuscate all the C code, or whatever language you use, it's gonna be painful, but keep in mind that this is doable, especially if you maintain your own tool set. That's why I usually love to write my own tools: I have full control over the source code and I understand what's going on, to a certain extent. So let's go back to this original code. For those of you that are not necessarily familiar with most modern C2s, they have a concept called the sleep mask, which is the ability to encrypt the agent's data in memory. When the beacon or the agent is not doing anything, it sleeps in memory encrypted. There's no point in leaving that piece of code readable when it's not doing anything, so they hook the sleep method — there are multiple methods you can hook — but the point is, while it's sleeping, it's totally encrypted. So if there's memory scanning going on, they're not gonna see anything. However, there's a catch. For a loader like this one, the sleep mask has limited effect when it comes to memory scanning, because if you look at how these engines work, you end up copying the code in memory in several places. Just a quick hint: the 'final' variable here. We're definitely gonna have the final shellcode in there; it's gonna live in memory as the original shellcode. The Marshal.Copy too: it copies this into an unmanaged buffer that is later used for the call. So we have at least two variables that contain the clear-text shellcode, which is definitely something that you wanna avoid, right? And you can always validate that by dumping the process.
So if you just dump the process and search for patterns — remember the MZARUH I mentioned earlier — without surprise, we have two instances of it. The reason why we don't have a third one is that the agent itself is actually encrypted at this specific time, right? That's the expected behavior; that's the whole point of a sleep mask. So, million-dollar question: is there a way to have an invisible shellcode within your process? Well, the answer is yes. Here's the trick. It relies on getting rid of the 'final' content and the allocated content later on. How can you do that? Well, there are two things. We're gonna spawn a thread. The whole point of having a thread is that you have execution in parallel to another flow of code that's running. All the shenanigans of memory copying and execution are gonna be inside of that thread, which I call LetsHaveFun for whatever reason. And then inside of your main, you just start that thread. So far, nothing fancy. Then, just to be sure, I'm also gonna make those variables global to the class, so I'm sure that there's no copy, because there's a bunch of magic going on, especially with C#, where you can have clones of memory and stuff like that. If it's inside of a thread and it's not global, you may end up with an extra copy of the code without even knowing. Once again, curiosity is the key: feel free to just dump your process and analyze what you see in memory. You may be extremely surprised by what you find in there. So making them global is definitely gonna help you. And for those of you that know me, I love C — not the best at writing C, but I love it. So here's the million-dollar solution. There's a specific keyword in C# called fixed where you can actually use raw types, like byte pointers. Everybody says there are no pointers in C#.
Well, technically you can cheat, and everything is a pointer underneath, but that's another story. The point is: you have your main, you start your LetsHaveFun thread, so you get your beacon running in memory. Then you sleep for whatever — I usually put 12 seconds, just to be safe, to make sure that it happened. Then you enter this unsafe piece of extremely terrible code. The point is, you're gonna generate a string. In this case, it's just gonna be the length of whatever the shellcode originally was, and I fill it with a bunch of 0xCC. You can randomize that, be creative, it doesn't matter. Now, just for the fun of it, I retrieve the raw pointer to the actual data that we created earlier: ptr is now the raw unmanaged pointer to data, which is the managed variable. And what we're gonna do is fairly simple. Keep in mind that the thread is running in parallel; at this point, 12 seconds later, the shellcode is definitely in memory. It was copied by whatever framework is associated with the payload you use. So let's clean up that memory. It's super simple: all you do is copy that string back over those two locations. At this point the execution has completed, and the beacon copied itself somewhere else, so you don't need the original payload. You empty those buffers and you move on with your life. Extra point: you can even change back the memory permissions, because this is also well known. Read-write-execute memory will get you detected, depending on whatever product they use. So you can revert the permissions on the code to a typical executable region, which is what you'd expect to see, right? And you're done. And if you actually dump that process memory, all you're gonna find is a bunch of 0xCC. The beauty of this is there's no trace of your shellcode, because at this point it was modified in memory.
So keep in mind that whatever tools you use, it's always important to do your cleanup, and most people don't actually think about that. Memory scanning is getting more and more popular because we're getting more and more creative, so if you don't clean your original clear shellcode after execution, there's no point in using fancy techniques: they're just gonna find it sitting in memory, simple as that. Keep in mind that, as I mentioned, the same can be done in C#. MSIL is like assembly: C is the human version of assembly, and the same goes for C#. You have the opcodes associated with a bunch of IL instructions, which are the middleware kind of thing between C# and the engine, and the opcode is what's understood by the .NET runtime. So we can also technically obfuscate these, right? The beauty of this is that there's a built-in capability that allows you to get the IL code of a method. I'm not gonna go into too much detail here, but take a sample — let's say you created your malicious payload — and you get all the IL bytes. The point is you can obfuscate them. After extracting all the IL, you can, in this case, use a simple technique like a plus three: every byte gets three added to it, so it becomes gibberish and no longer corresponds to the original. The point is you can patch the original assembly. So instead of obfuscating the C# code that you write, you obfuscate a layer below, closer to the actual engine execution, and you get that obfuscated MSIL code instead of typical C# obfuscation using random strings and AES tricks or anything like that. At this point, the method is not even functional within the binary. And the beauty of it is, what you do after that is hunt for your own method that was modified.
There's a pattern: I just need a couple of bytes that are unique enough at the beginning of my modified code. In this case, it's 03, 75, 8A, 03, this whole thing. And what I'm gonna do is read my own process memory: I loop through the memory until I find that pattern, and knowing that it was plus three, all I have to do once I find the location is revert it back with minus three, right? So you're using unmanaged functionality within your managed code to modify the managed code back to its original state, if that makes any sense. Hopefully it does, ish. The beauty of this is you get to the point where you have your MSIL code back to its original state, and you don't need any C# obfuscation, because you're a level below: no need for fancy variable shenanigans or trying to bypass AMSI. You don't care, you're just below that, so they're not gonna understand any of it, because it's just gonna be a bunch of random bytes, right? But there's something else to consider, something that people tend to overlook a lot. That's cool, you obfuscated your stuff, you have plenty of really interesting concepts going on, but the thing is, C# is gonna leave a bunch of artifacts in your binary that people tend to forget about. One example is the PDB path. It may sound silly, but for those of you that actually do red teaming, you may end up in a known-malicious database just because of the PDB path that you use all the time. If you're using the same VM over and over and you have this random username or anything like that in the path, you may actually end up in there, because by default it's always gonna be part of your assembly in .NET. That's definitely something you need to consider. There are a lot of tools available for that, but you should always clean it. Another really important thing: you can use all the obfuscation in the world.
Every time you use a function, whether it's C# or an unmanaged function, the name ends up in clear text in the assembly. We have an example here where the sample does Assembly.Load, and when it performs Assembly.Load, it invokes a specific set of methods within the framework, right? So you're gonna end up with something like mscorlib, Load, Invoke, GuidAttribute, Debuggable, all that stuff, which are known patterns. So you may come up with a super cool evasion technique and be like, "but I got detected right away by MDE or whatever," and wonder why. It's probably because your imports have some well-known patterns and you don't even realize it. That's also something to keep in mind. So if we go back to this kind of code: we modified the MSIL, but we could technically modify more than that, right? We can get the offsets of all of those names and make sure that we modify them too. At runtime, they get patched back to the original, but when you look at the binary on disk, it's just gibberish. That's something really important to consider. For those of you that are familiar with C, you have the concept of the import table; it's pretty much the same. There are a lot of products that use your import table to create signatures, and this is no different: there are techniques and patterns that you find when you look at malicious code, and this is no exception. So make sure that you also assess the final binary to see this kind of output. Does it look like something that is known to be malicious? If the answer is yes, well, you may have guessed it: you probably have something malicious going on. Same goes with the GUID. There's a GUID associated with your build. Funny enough, I know cases where actual APT groups were uncovered because of that.
They found that the GUID was used in a malware sample and also in a legitimate software, probably because the guy forgot about it and ended up compiling malicious code on his production machine or whatever. So this GUID is also key. Keep in mind that you either replace it with gibberish or you make one that looks legitimate but isn't yours. Especially for red teamers: you probably have one VM, you generate the same code over and over and over, and if you test the same client, or your internal red team practice, they may end up creating rules just based on this GUID associated with your Visual Studio environment, and they're gonna detect you. Doesn't matter what you do: they know it's your signature, and it's unique per installation, so that's also something to consider. The point is, you can have super complex, fancy code with super advanced techniques, but if you forget about those little details, you may get detected right away. And funny enough, if you don't believe me that these are actual patterns in use, take a look at Microsoft's security products in memory. Just a quick note on that: if you have the TrustedInstaller privilege, you can do pretty much whatever the hell you want with Microsoft's detection products, so you can dump the process memory, and if you look for a certain keyword, let's say "PDB", within that memory — well, obviously it's a bit of a mess, but you realize that these are some of the patterns that they know to be malicious, right? Of course, some of them are obviously malicious, like a "BitcoinMiner.pdb" in a source release path: it's pretty obvious that it's probably a Bitcoin miner. But some look a bit more subtle, like this "DN loader" release path.
It's probably just another tool, and they realized that this pattern is always there, because every time that group or that individual creates a malicious sample, they forget about the fact that they have this specific user in the path. Same here, for example: this "release loader" one. "Loader" can be literally anything, but it was unique enough that they were able to make signatures out of it. So, back to this kind of code. Once again, you can come up with some interesting technique — and I've been told that I have five minutes left, thank you, Martin — you can come up with a really cool technique and realize that they caught you way before they got to your fancy code, because you forgot about those little details. Another example: "release shellcode", no question about it. How many times do you think you've named something "shellcode" in your life and forgotten about it? Probably a million times. This is an example, it's in there: they're gonna catch you just because of that. Doesn't matter what you did afterward, it's there. So this is something really important to keep in mind. At the end of the day, as I mentioned, it's always about being creative and making sure that you check what the code you write is gonna leave in your binary. This is a typical reflective loader in C#. It's super simple, literally three lines of code: you can load whatever .NET assembly you want in memory. This is something that I love to use. However, the pattern is super known, right? If you look at the data in your EXE — remember, earlier we saw the mscorlib, Load, Invoke, GetMethod strings and all of that — it's always gonna be there. So you need to make sure you get rid of this. Another thing to keep in mind — this one is a bit small, but the point is, this is something that I do often too: heavily obfuscating my code.
I have a little tool; if you're interested, I'll be more than happy to share the concept afterward, because we're running out of time. The point is, I use a switch-case concept, and it's all automatically generated for me. It takes a function or a method that I provide and creates this crazy switch-case statement. So this code is massive. It's literally the same as the original at the end of the day; the only difference is all the gibberish that was added. There are a bunch of checks, a counter, magic going on just to make the code heavier. However, here's the catch: no matter how complex it is, you still have the mscorlib Load pattern in there, because that specific method has to be called like that. There's nothing you can do unless you hide it, which brings me back to the earlier point: you definitely wanna hide those strings, and don't forget to hide what I like to call the import table within your EXE when you're dealing with .NET, because the patterns are easily detectable there. Another thing you can do: if you look at the opcodes — as I mentioned earlier, this is the opcode level, the equivalent of assembly for C# — the small version is just that, right? The switch-case one that I showed you is this one: it's definitely bigger. So this helps: pattern matching is gonna be different. But if you look inside of it, you may still come across part of that pattern. So it definitely makes pattern matching much more difficult, but it's still something that can cause you problems. Obfuscating on top of it will make sure that the pattern is not present in the bigger, more complex function. I guess at the end of the day — I've been saying this for years — I believe that obfuscation and evasion is really an art, and I think lately in the industry, people are trying to come up with super complex techniques, but sometimes they just forget the basics, right?
The little details will get you caught. So before you try to come up with some complex technique where you're scanning your own memory, patching MSIL code and all of that, make sure you actually understand what's happening under the hood. When you generate a C# payload or just a piece of C, what are the artifacts left in there? Hopefully the demos achieved the goal of showing you that there's a bunch of little artifacts left in your code. I guess: be creative and enjoy life, and that's it for me. That was fast. Two minutes left. Wow. Any questions for Martin, I guess? Just yell. Perfect. So are we taking questions or are we just leaving? Okay, so apparently we're gonna have a Q&A at three something. If you have any questions, I'll be more than happy to answer them while you guys exit the premises. Hopefully you had a good time. I know I did. All right. Welcome back. Welcome back to this third talk of the red team block. We'll have as a speaker Laurent Desaulniers, who is an amateur in most things: CTF challenge designer, speaker at a few conferences, and also Balenciaga model. So hi, everyone. Thank you so much for being here in such a large group. Today I'm gonna talk to you about deception for pentesters. First, let me introduce myself. My name is Laurent Desaulniers. I am the vice president of breach detection and response at GoSecure, but first and foremost, I am a NorthSec challenge designer. It's my eleventh year as a NorthSec organizer, and I'm also a magician and pickpocket. So today I'm gonna talk to you about magic as well as about pentesting. And the question you should ask yourself is: why? Why am I here talking to you about "pick a card" and these types of things?
And in truth, magic improved my pentesting. What I found out was that once you understand how the mind works — how you can capture someone's attention, or make sure they don't look at something you'd rather they miss — then suddenly you're better at lying, better at deceiving, better at crafting the right phishing email that suggests just the right level of stress without being too noisy. So basically, today I'm gonna talk about magic and how it can make all of you better pentesters. And I felt kind of clever thinking about this, but it turns out other people had thought about it before. There's a freedom of information act request you can make in the United States showing that a famous magician coached the CIA in deception techniques. So if it's good enough for the CIA, I figure it's good enough for us. But let's go even further. Jean-Eugène Robert-Houdin — the magician Houdini later took his name from — helped Napoleon III's France do propaganda and psyops using a trick called the light and heavy chest, which was used to convince the population that the Europeans were stronger and that their magic was real. So if we look at these things, I mean, there's a history where deception and magic intersect. So today I'm gonna talk about a few things: convincers, suggestion, repeatability, repeatability, repeatability, don't run if you're not being chased, managing attention, stooges and accomplices. And, while it's not exactly about magic: surveillance, surveillance detection, how to tail someone, how to surveil people, and common mistakes people make when they're tailing someone. All right, so let's go. The first principle you need to know about magic is that people doubt others but don't doubt themselves. What it means, really, is: if I tell you I'm a plumber, you might doubt it.
But if you look at me and you think: he's got this, he's got that, there's a problem with my faucet, I called someone — then you're more than likely to believe that I'm the right person. So if I tell you something, you can doubt it. If you deduce it yourself, then you won't be as likely to doubt it. A convincer really is a game where I wanna give as many hints as possible about what I'm doing here and what I'd like you to do, without actually telling you. Because if I tell you, it doesn't work. That's called a convincer. And of course, if you talk about your convincer, it doesn't work. In magic, you'll never see someone say: look at my thumb, it's a totally normal thumb, I have two, there are two sides to the thumb, here's a thumb, it's a normal thumb, look at this thumb. That doesn't work. I mean, they're gonna show their hand, but the moment they say it's a normal thumb — of course it's a normal thumb, we all have two. Well, most of us. It's the type of thing where, if you need to say it, then it's not working as a convincer. And if you had one rule to recall, that's the one: people are the best at lying to themselves. People don't doubt themselves, but they're likely to doubt others. So when you're crafting phishing, you're much better off not saying something, but letting them deduce it. Believe that they're smart — more than likely they are — and give as many hints as possible. Let me give you a few examples. This one I've already talked about: the lanyard attack. Say I have a lanyard from EY. EY are famous auditors, and if I wear my EY lanyard, people will think: hmm, this guy must be an auditor. Of course — who else carries lanyards, or who's crazy enough to collect lanyards? Now, nobody will ever let me in just because I have a lanyard; that doesn't make sense.
But it adds credibility, and people build on it. Some things are even simpler, and the following attack my team did this year is the very fabled hot dog attack. Some places have favorite spots and favorite restaurants. So we found where people ate, in this case hot dogs, and we purchased the same hot dogs at the same place, and we walked in with them as they came back to the office. Now, of course, nobody will ever say: hmm, I trust this guy, he has the same hot dog brand as we do. That never happens, right? But they might say: hmm, I don't know this guy, but he works for us; obviously, he knows our secret hot dog spot. And it really worked. People find the hot dog attack stupid, but it works. So keep in mind, convincers don't need to be fancy or complex. All they need to do is reduce the risk of somebody doubting you, because you gave a hint that you're part of the in-group. So lanyards, uniforms, the way you walk, the way you talk, are all common convincers. Convincers are a way to prune paths in the other person's mind: when there's a set of possibilities, convincers are used to remove some of those paths, like the one where perhaps he's an outsider. But what if instead you wanted a person to perform something? If you recall the principle: if I tell you to do something, you can doubt it, whereas if you convince yourself, you're more than likely to do it without realizing I pushed you toward it. And of course there are ethics here, but you know, if I say: gee, the dishes are due, and we may have visitors any time, I'm not telling anyone to do the dishes. All I'm saying is: the dishes are not done yet, and we may have visitors. The person might be more inclined to do it without me telling them to do it. You understand the difference? And since the desire, or the understanding, comes from them, comes from within, then suddenly they're more than likely to do it.
So that's the first principle of magic: people are the best liars to themselves. Does that make sense? All right, I'm going a bit fast because we have plenty and plenty of examples. Repetition, that's a common thing in magic. First time is exciting, second time is normal, third time is routine. Magicians do a thing where they show something in their hand, put it in the other hand, take it back, put it in, take it back, put it in... oh, it's empty. Why is it empty? Well, because the first time you were looking at the hand and your brain said: there's something happening here. But nothing happened. I take it back, and I put it in again. Now your brain says: mm-hmm, I didn't catch it the first time, this time I've got it. But no, this time again it was real. And the third time, your brain was lulled by the sense of repetition, and then the coin is gone. Repetition is a way to build trust, because it's normal. You have a model of awareness, and things that are repetitive become boring to your mind; they get erased. Of course, it's a mechanism to reduce suspicion, as I mentioned. For example, asking for information. You're performing a social engineering attack, and you need to know something important. The first time is abnormal, so that's not when you should ask your question. If you want somebody's password, the first time you call them, don't say: hey, what's your password? That won't work. Instead, how about you call and give information, share something noncommittal, like there's a meeting about something. The first time, when you have the most attention on you, is when you should be the most mundane. The second time, you reinforce that model. Again, you're noncommittal, you're asking for something that has no impact. And the third time, then you go for the kill. By creating patterns of three, it's much, much easier. Let me give you an example.
We had to perform, and I'm really happy with this example, so I'm gonna take like three minutes, we had to perform phone phishing. And phone phishing is super difficult. One, it can't be automated; two, depending on the personality of the person, it's difficult. So we had to craft a scenario. Here's the scenario. Hello, I'm calling regarding a survey about your IT services, do you have a minute for me? One, they will say yes. Two, it's boring. If you're having fun doing social engineering, you are failing. Most people who call you are not having fun. If you're like: hey, I'm super happy to talk with you, can I interest you in insurance? It won't work. Those people hate their lives. So do the same. I'm calling, saying: hey, I'm here for a survey, first question. On a scale from one to ten, how happy are you with the IT support? This is noncommittal; they're gonna answer seven, who cares? Question two: on a scale from one to ten, how happy are you with the equipment you're being provided? And then I give an explanation: in times of COVID, any items that belong to you and were not provided by the IT company, please disregard those. Whatever, they give you four, five. Question three: on a scale of one to ten, how happy are you with your network speed? Whatever they say: huh, interesting. What's your speed? The way it's built, the fourth question is not part of the survey; it looks improvised, like: huh, what's your speed? Interesting. And when people recall the discussion, they will recall the three questions, not the oh-by-the-way that seems like an afterthought. So, what's your speed? I don't know, whatever. Oh, you don't know? No problem. Can you go to evol.donkhoekmi.hackmi.ca slash evol.exe, we're gonna run a speed test. And here's why it's so clever. Some people knew that this was wrong. They knew by then that there was a problem, but they had said yes three times.
They had committed, they had said yes, they gave information: oh, I don't like the background, or my mouse doesn't work, whatever. So they had involvement in the discussion. Once they had their arm in, it was too late to say: oh, I don't feel comfortable downloading your EXE. So we created trust, we created repeatability, and at some point, when we did interviews, they said: I knew it was bad, I knew I shouldn't have downloaded that virus, but I did, because it would have been an awkward conversation otherwise. And I swear, this scenario is using repetition, is using trust, and is using what's called the afterthought, or side thought, where the important part doesn't look important. I'm gonna talk about that in a minute. Does that make sense? All right. The same thing is used by politicians and pollsters all the time: the three-yes rule. If you want somebody to do something, ask three questions that will end with a yes. Are you using Windows? Yes, cool. Do you work at this place? Yes, amazing. And are you having IT problems? You want a yes. By setting up three questions that end with a yes, people are more likely to answer the next one with a yes too. Does it work all the time? Of course not, but it really, really works, and politicians and pollsters use it all the time. A certain famous German politician in the 1940s was very famous for using this rule. Now, the whole reason why I started this talk was because of the following situation I'm gonna share with you here. The magic principle is called: don't run if you're not being chased, and what it means is this. We're doing an intrusion at a site, and we're saying we're a plumber, and we get caught; there's somebody challenging us. If I say: I am a fourth-generation plumber, my mom was a plumber, she went to the International Plumbing Institute where my father is a copper plumber, I'm such a legit plumber that I named my dog Gasket, and my first kid is gonna be called P-trap...
Well, you know, most people don't talk like that. Most people, if I ask: are you a plumber? They're gonna say: yes, I'm a plumber. That's it. They're not gonna go on about their kids or their dog's name or anything. But pentesters oftentimes feel guilt. Deep down, we're good people, and we're told that lying is bad. So, since we're feeling guilt, and there's risk, we want to over-explain, and that's a common defense mechanism. We all want to be good people. Of course, naming your kid Gasket is a bit of a stretch, but the idea stands. Oftentimes you're gonna see people, especially pentesters, over-explain. While just saying "I don't know" is such a powerful phrase. When you're being asked: what are you doing here? I don't know. And oftentimes, the other person will have to do the processing on their end. It's like judo: you're using their cognitive power against them. I don't know. Oh, you're here for this? Yes. But you need to pause. When you say "I don't know", stop. People will fill the blank. This is the type of thing where, if you run when you're not being chased, you're gonna look suspicious. And I've seen this over and over and over in physical pentests. So, again, you must be comfortable and not feel guilt when you're lying. Now, how can you avoid feeling guilt? It's easy, like everything else: practice. Next time you're in your taxi and you don't have kids, talk about your kids. Mine are two and four. Don't lie to people who are important or that you're gonna see again; that's not gonna work, you know? And don't invent things that don't make sense. No one's gonna believe: oh, I'm an ex-marine and now I'm an Olympian. It won't work. That's not gonna happen. Be mundane. You work in an office? Talk about how life at the factory is difficult.
Or invent that you're a fax repairman whose business doesn't go super well, except in hospitals. Find something, learn to lie, and you'll find out that people don't care. It's kind of a sad story, because we're all unique snowflakes, but when you think about it, most people you'll never encounter again in your life, and they don't care. But you need to practice lying. You need to practice giving just the right level of information. Taxis are wonderful for this, because you meet people and you'll never meet them again. You need to learn about lying, and to understand that all this over-explaining comes from guilt. Keep things simple. Now, managing attention. There's this amazing book, difficult to find, called Leading with Your Head, that explains how the attention model works. There are things called active versus passive positions. If I'm talking with you like this, you know I'm interested, you know I'm looking at you. Whereas if I'm like this, I might be listening, or I might be thinking about whatever I'm eating tonight, right? It's just my body language. So the idea is: people will care when you care, and people won't care when you don't. If you need to copy a card or something, and you're hunched over it like this, well, of course they might feel something's wrong, right? Or, if you're giving information and you want them to focus on something, look at them. Be active in your posture. Whereas if you want someone not to look at something, don't look at it. I oftentimes see people copying cards and staring at them while doing it. Of course people are gonna look where you look. We're humans, we look at people's eyes. If you want people to see something, look at it. If you want people not to look at something, don't look at it. It seems very obvious, but it's not so obvious after all. Then you need to keep in mind what is called the bubble of attention.
So, quick question. What is more worrisome: somebody at your back, or somebody in front of you? At your back, yes. You can talk out loud, by the way, I see you, but just in case. So, people at your back are more worrisome than people in front of you. Obviously, if you don't want to be a threat, you should be in front of them. Now, what's more worrisome: somebody in front of you, or somebody next to you? Next to you is less worrisome than in front, in theory. And what's worse: somebody moving toward you, or you moving toward them? Well, if I am moving toward someone, it's less threatening to them than if that person is moving toward me. So when you take that into account, if you need to clone a card or pick a pocket, you're much better off using that model to be in the right place. My favorite spots are escalators, because when you're on an escalator in a non-active position like this, cloning a card, you're not a threat. You're on the side. You're not moving toward them; they are moving toward you. And the fun part about an escalator is they can't go back. So it's pretty nice. Elevators work as well, and doorways. But we've all seen the TV series where a cloner hides behind a guy like this, trying to clone a card, and everybody finds it really weird. Try to avoid that. When you know how people perceive what's around them, know the bubble of attention, and know where and when to attack. Yes, I worked super hard to find this Sun Tzu quote, because obviously all cybersecurity conferences should have a Sun Tzu quote. And this one is: the hand concealing the object should be dead to you. So really, if you have something important in your hand, like a Rubber Ducky... and this is a real thing: I got into a high-security place where they patted you down, and I had a Rubber Ducky in my hand. And they did pat me down. I was like: sure, pat me down.
And I felt like such a spy, but in the end, I just stopped looking at it and they didn't care. Oh, by the way, if you're breaking into a high-security place, they're gonna look for cell phones and cameras. You're allowed to have decoys, right? And they never think about it. They're gonna ask: do you have a laptop? Say yes. You're allowed to have two laptops; they won't think about it. And decoys work very, very, very well, by the way. Use cheap laptops, because oftentimes they're gonna expect you to come back to get them. So find the most broken, super outdated laptop on Marketplace, leave that laptop behind, and they'll assume you'll come back to get it, because it's an expensive item, and then you're free to roam. So when you're breaking into things, think about having decoys. Anyway, that's a side note. Active versus relaxed state. Again: if it's important to you, be active; but if you're actually doing something at that point, you should be the opposite: a relaxed state of mind, a relaxed body. Because of course, if you're tense like this, people are gonna feel the stress. And it's about position, words, tone of voice, all these things. There's this amazing book called Psychological Subtleties by Banachek where you can read all about this, or Leading with Your Head by Gary Kurtz. As a side note, Gary Kurtz was a dancer before being a magician, so he's an expert on the body, and it's like a 300-page book about all the subtleties of position and how to use your hands and your gaze to convey things. It's really nice. Stooges and accomplices. So, spoiler: in magic, sometimes we have things called stooges and accomplices.
An accomplice is somebody who's on your team, and whoever you're trying to con doesn't know it. In magic, the first thing they always do is say: we've never met, we've never seen one another. Of course, the accomplice is expected to say no. But why do they all say we've never met, we don't know one another? Because two people build trust. So if you're able to have an accomplice, it's amazing, but accomplices only work as long as it's not obvious you know one another. If you say: hey, how are you, nice meeting you, then suddenly you're not two teams of one, but one team of two, and then you can't leverage the accomplice. Now, how do you use accomplices in pentesting? Of course, if your client lets you in, it's easy. But say you're able to get in once using one person. Then you can escalate by suddenly becoming an internal employee, opening the door for somebody else and saying: hey, you're a new person, by all means register. Now you're able to bring your own team inside the premises, because you're an insider, and you're even enforcing the rules of having people sign in, and since you're there, you can even request a badge for them. Being an accomplice only works if it's not obvious that you're working with the other person. So that's called an accomplice, and it's used a lot in magic. But there's something much cooler in magic that is not well known, called stooges. Stooges are accomplices who don't know they are accomplices. And this is really nice. Let me give you an example, and I'll talk about it over the next few slides. I came into a pentest and said: I'm here for the audit. Now, the first person expected me to be an auditor, so that person brought me to the auditors and told them I was here for the audit. But the auditors know that they don't have an ongoing audit, right? So I'm kind of stuck.
What do you do in these cases? There's this amazing magic principle you can use called double talk. Double talk is when you say something and both sides understand something different. When I say I'm here for the audit, the person at the reception hears: I'm here as an auditor. Whereas the auditor believes that I'm here to be audited. I'm using the same words, I'm saying I'm here for the audit, but the two people hearing it understand something different. Does that make sense? It's an amazingly powerful tool. I'm giving you a very, very simple example about audits, but when you can have somebody on the inside understand one thing, and that person becomes your messenger to somebody else, then you're leveraging their trust in order to get in. That's something super useful, and it's called double talk. Now, unless you're exceedingly witty, it's very rare to see people able to improvise double talk like this. It requires some practice, but it's a very, very nice tool. Now, I have like four more minutes, and I want to talk to you about surveillance, because recently I've been privy to people doing surveillance, tailing people, and I want to give you a very, very good example. What is wrong with this picture? No one? Well, first: yes, people doing this. If you ever have to do surveillance, please know that you don't need to touch your earpiece. In movies they do it because they want you to see it, but you don't need to. Oftentimes you can just talk and it'll work. And it's very obvious: when you see people being trained at surveillance, they all have the reflex of doing this. There's another problem with this picture. What is it? Yes, exactly: the earpiece. You don't need a visible earpiece. Nowadays the earpiece is so small it goes inside the ear, and there's an induction loop in your collar, so there's no wire, it's just radio.
But more than this, again, in movies they use cell phones and so on, but you can't always use your cell phone. If you need to talk to someone during surveillance, there's something called a pressel, a PTT switch, push to talk. And the way it works, roughly, is that since you can't talk, you need to have an operator asking questions. So I have the pressel in my hand; it's a little button, basically. I press three times to say: ask me a question. And the operator will say: is the target moving? I press once for yes. Is the target going west? I press twice for no. East? Twice for no. North? Once for yes. And so on. So it's not like in the movies, and people don't need to be as worried about surveillance as the movies suggest; the way they do it there is really bad, and real pros do something like this instead. But it still comes back to magic: if you care about something and you look at it, people will look at it. So if you're ever lucky enough to do surveillance (it's really boring work, by the way), just relax, don't touch your earpiece, and behave normally. There are lots of things I'm out of time for, so I'm not gonna talk about clothing techniques and counter-surveillance. Talk to me over a beer if you'd like. But it's something you shouldn't need to know, and in the movies it's totally wrong. Yeah, I said this. And references, finally. The CIA manual of trickery and deception: this is something the CIA wrote with magicians. Leading with Your Head, I talked about it before. Psychological Subtleties. Sleights of Mind, which is amazing: it's neuroscientists looking at deception. And the book Influence by Robert B. Cialdini, a psychologist who wrote about how influence works. It's a really, really good book. Thank you so much for your time. You've been great. Have a nice day. Thank you, Laurent. It was awesome. So, last call for questions on Slido if you have time.
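The press-and-count convention described above is simple enough to sketch. This is purely an illustration of the yes/no signaling scheme from the talk, not any real radio protocol or tool; the names and the mapping are assumptions for the example.

```python
# Illustrative sketch of the push-to-talk (pressel) convention described above:
# the operator asks yes/no questions, and the agent answers silently with
# button presses. The mapping here is the one given in the talk.
MEANINGS = {1: "yes", 2: "no", 3: "ask me a question"}

def decode(presses: int) -> str:
    """Translate a number of pressel presses into the agreed meaning."""
    return MEANINGS.get(presses, "unknown")

# The exchange from the talk: moving? west? east? north?
answers = [decode(n) for n in (1, 2, 2, 1)]
assert answers == ["yes", "no", "no", "yes"]
```

The point of the convention is that all the cognitive load sits with the operator: the agent never speaks, only acknowledges.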
And after that we'll be doing the Q&A. Welcome back to the Q&A of the red team block. I hope you enjoyed the afternoon; I personally really appreciated it. So let's start with a round of applause for our speakers. By the way, I haven't had a single question on Slido, so it's going to be all my personal questions, since I've received no feedback from you this week. That's not true, okay, a bit of feedback. And we'll start with the easy ones. So let's just start with a basic question. We had a talk by Guillaume on guardrails. I really appreciated this talk, and I really appreciate guardrails; it's a very interesting topic when you develop malware. What do you see as a challenge in 2023 for EDR evasion from the guardrail point of view? Well, anyway, whatever the point of view. From my perspective and my knowledge, in 2023, guardrails are not really detected as a behavior. I don't know any EDR that sees a process looking at the user or the domain variable of the computer and says: hey, this is a guardrail, obviously, let's block the process. So I don't think it's a problem right now, but it might be in the future. Okay. Because I saw a trend between the talks of Guillaume and Shao: you two modify your code, modify your malware, and it becomes a bit weird. And I have had some challenges evading EDR by just adding layers, things like that. So, same question for you, Shao. Do you see your techniques not working in one or two years? Well, I guess it always depends on how you implement your stuff, right? I always try to be as generic as possible, but I'm a bit of an idiot, so I shared those tricks online with a bunch of people, so they're not really a secret anymore. However, I guess it's always about creativity, right? This is something that I hinted at through most of my talk: you just have to be creative.
Sometimes you just go in the shower and you think about some weird things and you come up with some ideas, right? At the end of the day, EDR cannot track everything, whether it's hooking or kernel callbacks or all of that. They cannot hook everything; they cannot have callbacks everywhere, because that would just make your system unusable. So you always have to play the cat-and-mouse game with them. And at the end of the day, it's part of the job, right? Being creative and always coming up with new solutions. So they're going to improve; we're going to improve. I gave a talk six, seven months ago at Hackfest about machine learning. For those of you that attended that talk, I was always air-quoting "machine learning" because I don't know anything about machine learning. But I know for a fact that they're definitely not using machine learning, based on what I presented: typical, super simple evasion techniques still work, and they should be able to detect those with machine learning. So we're definitely not there yet, I guess, from my point of view, which has no value. If I may add: I talked yesterday with some detection engineers, and they told me about one indicator that I think will be a challenge in the future, which is saying: this binary is only present on two of your hosts. We've only encountered this on two hosts, or it's never been seen before on the internet, except there. That type of intel is going to be a real challenge. Much more than obfuscation: at the EDR level, due to the risk of false positives and so on, there are years and years of work ahead, and these people are at the forefront of it. But when we're attacking human analysts, these types of metrics, saying that across your 10,000 hosts this binary is only present twice, that's something you should look at. And this, in the future, will be much more difficult to evade.
And if I can add to that: that's why, nowadays, DLL side loading is so popular. Because if you have a legitimate process have a DLL side loaded into it, and there's an alert afterwards, the alert will point to the signed binary, which is found on thousands of systems worldwide. So the first investigation step will see a legit binary. And if I may add regarding this, it's super cool: they can technically detect side loading if the loaded DLL is not signed. However, if you look at .NET, you have the concept of AppDomain, which allows you to actually load a DLL without actually side loading it, and it doesn't have to be signed. So for every problem, there's a solution, literally. But yeah, side loading is definitely cool. This is something that I've done in phishing: you use an actual legitimate Microsoft-signed binary that supports side loading, so you just ship the executable that is signed, plus your DLL, and you bypass Mark of the Web, SmartScreen and all of that, and you get your code execution through that actual binary. So it's, once again, just about being creative. Thanks for the answers. I'll talk now about deception. I really liked your talk, Laurent, about tips and tricks and the link between magic and deception. I guess everyone knows that deception is a real topic in our field: you can put up fake systems to detect red teamers, for example, and of course real threats. Have you thought about this topic? Do you think magic can be used to... is there a way to detect threats with deception? Oh yes, absolutely. I'm also doing purple teaming, and nowadays more incident response, and we do exactly these types of things. Of course, there are all the common techniques: have a password in GPP, a honey token, inject creds in RAM that don't work but watch for their use, watch files. All these things are fairly common, but there are ways to cheat. There are ways to have files with beacons.
And now, if you're using canary tokens, all these things are well known. But as you know, most docx files are XML files inside, and if you have an external DTD, it can call back to you. There are lots of things you can do to hint people into doing this or that. So yes, deception works; it's a tool that can be used both for attack and defense. And there's been such a massive improvement the past few years in honey tokens and those techniques; it really changed the layout of the way we do red teaming. Myself, I'm not technical, so I oftentimes don't rely on these cool... Amateur, right? Exactly, yes. So I don't rely on these advanced techniques. I mostly go to people and do stupid things like asking for the files, and, you know, there's no EDR for humans, right? So yes. Well, just maybe a take on deception, but from an offensive perspective. For those of you that are familiar with LOLBins, you may have heard of MSBuild, which allows you to run an XML file with embedded C#, right? So Microsoft's response was to actually run AMSI on that file to detect this, right? And one day I was like: hey, this is XML, right? Why not just use entities to obfuscate the whole thing? So now what I use is XML character entities to encode my payload, but nobody looks for that, because right now AMSI is only gonna look at the C# part of this. So it's about deceiving the people that are chasing you, right? Put your code somewhere where it's not popular, or where they're not looking, and that way you're always gonna come up with new techniques. I've been using this for probably a year and a half, and Microsoft doesn't care. But it's a technique you've been using for a while now. I mean, people who are on your Discord have known about this for a while. Because I share all my little secrets with the world.
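The entity trick mentioned above rests on a general XML property: decimal character references hide a literal string from a naive substring scanner, while any conforming XML parser still sees the original text. Here is a minimal, neutral sketch of that property using only the Python standard library; the helper name `entity_encode` is illustrative, not from any real tool.

```python
# Sketch: XML character references can hide a string from naive substring
# scanners, while an XML parser still decodes it back to the original text.
import xml.etree.ElementTree as ET

def entity_encode(s: str) -> str:
    """Encode every character as a decimal XML character reference."""
    return "".join(f"&#{ord(c)};" for c in s)

secret = "Hello"
doc = f"<root>{entity_encode(secret)}</root>"

# A naive scanner looking for the literal string finds nothing...
assert secret not in doc
# ...but any conforming XML parser resolves the references transparently.
assert ET.fromstring(doc).text == secret
```

This is exactly the asymmetry the speaker is exploiting: the scanner and the parser read the same bytes differently, which is double talk applied to machines.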
One similar technique that I share, and some people know about it, is that oftentimes you can use your target's own website as a C2. Think about, for example, a website that has a review field. You can craft malware that's gonna read the commands in the reviews, and then push the output as an answer to that review of the item. So when the SOC is looking at the C2, they're not looking at some weird website with an unknown domain or something else. What they're looking at is your internal computer visiting your own main website. There's some beaconing and so on, but you're leveraging trust and deceiving them. So you're not fooling systems, you're fooling the analysts looking at a system, who say: oh, this is fine. And that's, again, where deception is useful. Yeah. And thanks, Google, for creating the .zip TLD. That was a terrible idea. I think all the domains that we've registered so far are definitely malicious; none of them are legitimate. That was the dumbest idea ever. I agree. I think it's an amazing idea. There should be more. You know, I'm a proponent of PHP 5; I think the industry should use more PHP 5. Definitely. I think it's a wonderful idea. As a red teamer, I'm all for .zip. Yeah, definitely, but I don't even understand what the point of doing this was, legitimately. According to the blog, they said that .zip is more secure. Right. Let's change the topic. We're starting to receive questions. Thanks, guys. What's the most important difference between an OK red team and an awesome red team? The report. That's true. At the end of the day, you asked what's the difference between an OK red team and an awesome red team, and I believe that the report really does matter. You can be the best hacker in the world: if you cannot provide a decent report, it has no value, right?
And we tend to overestimate the fact that we're highly technical; the people that are gonna read the report have no clue what we do for a living, right? So you can have this super crazy chain of exploits that leads to whatever; at the end of the day, they don't care. They don't know what a certificate server is and all of that. So it really depends on how you actually present the results to the customer, right? I've seen cases where they didn't find that much, but it was well written and people still saw the value in it, right? So I guess, from my perspective, it's really about what you deliver to the client. For me, a good red team gets caught. By that I mean: once you've achieved your objective, as a red teamer, you should increase the noise level in a way to get caught, because you want to give feedback on: did they catch you, and then did they take the right decisions? Oftentimes we have people saying: oh yeah, it's just malware, AV cleaned it, thanks, done. But malware are not like mushrooms; they don't just grow on their own. If there's malware on your host, somebody put it there, and you should investigate. So oftentimes, I think a great red team makes the blue team work. Definitely. And I think a lot of red teamers are scared of being caught, but at the end of the day, as you pointed out, it's because someone did something great. We're not invisible to EDRs and stuff like that. It's just that I think EDRs are not ready yet, in the sense that everything is an event, but nobody looks at events, right? They should find a way to make them more useful and easier to query or whatever, but it's impossible to run something without existing. You're gonna leave a trace at some point, and they just need to get better in that sense, but I think they're just not there yet. And Guillaume, do you have an opinion on that?
Yeah, so another important aspect is to not spend, like, half of the project on the external part, because if you don't succeed, you don't provide any value to the client. So it's important to have a clear checkpoint when you switch phase to an assumed breach or a detect-and-notify phase. Awesome answers, I really agree with what you said. I will build on a few ingredients here. Let's talk about prioritization. So you set up a red team, you prepare your chain, you write your malware, you write awesome stuff, you take all the awesome tricks. How do you make sure that your attack chain, your attack scenario, will provide the most value to your customers? I guess, well, first you need to make sure that it's not detected immediately, because otherwise you won't provide any value: if the client hires a red team and you don't modify anything, you send your payload, it's detected, the client gets no value. So my point of view is: try to recon, try to find what's easy, their SIEM, their EDR, their configuration, mimic it in an internal environment, and when you're 100% sure that it's bypassing that, start your campaign, and the value will come after that. Sometimes we just call the white cell. We're saying to the white cell, we're gonna run this malware, can you run it just to make sure it's fine? And so we know for sure at that point that it's all right. I mean, is it cheating? Well, yes, but that's the best way to provide value. So sometimes we just straight up call the client and say: you want a red team, cool, make sure this runs on your system, like run this, it's totally fine, we swear, pinky swear. And that's it. And I guess clients also need to understand that we're limited in time and budget. So sometimes, yes, of course, if you give me six months, I'll come up with something more exotic. But at the end of the day, it's important that they understand that. Also, if you have an accomplice, make sure that they understand what they have to do.
Looking at you, Martin. But yeah, so it's important that everybody in the loop actually, truly understands what's gonna happen. How many times have I had to educate a client on what an actual red team is, right? It's a buzzword that is used in every RFP at the end of the day. It's more than that, right? As I briefly touched on in my talk, it's also about testing your capability of detection and how you can actually find a path to the crown jewels, right? It's not about: here's a list of thousands of vulnerabilities without any context. So they need to understand that we're not gonna give them a result like a Nessus scan kind of thing, right? And there's a lot of, you know, companies and vendors. Like the new thing that I see often is automated red team. I still don't really understand what it is, because if you have it, it's like a scan, but they actually exploit. So it's an internal test, from my perspective. But at the end of the day, it's an industry that generates probably billions of dollars, so everybody's trying to get a piece of the pie. But I'm gonna fight hard to make sure that people understand what my vision of a red team is. And fun fact: I was working at Trustwave years ago, and they're based out of the UK mostly. And a red team in the UK is really different from what we consider a red team, what I now call an American red team. American red team is what we're used to. The UK is more about the MITRE ATT&CK framework and specific cases. They don't, you know, tell the story like we do in America. So it was really different, but for them, that's the reality of what a red team is. So you also have to adapt to your actual crowd. I guess they use the TIBER-EU framework. Yeah, all kinds of stuff. But that's just their reality, and they also have CREST, which pushes its own agenda and all of that. So, you know, everybody has different standards. Let's start from that. Do you think they have more maturity than North America?
It's different. It's just, you know, validating a control is good, but sometimes you may miss the whole picture, right? Typical: oh yeah, I patch all my systems, but what about your certificate server? Did you actually make sure that your server is properly configured? Are your templates actually well secured? It can be up to date, fully patched, but if you have terrible templates, it's not gonna do anything. Sorry. Yeah, I agree. We have people doing the wave in front, so. Awesome. Don't worry about it. Yeah, I'm not sure what's going on with Mr. Marcel. He actually flew from super far just to be here and see Laurent. So you deserve a round of applause. Yeah. Jonathan. Oh, shit. Thank you. Next one. What about you, Martin? What do you think about all of this? All of this? Yeah, thanks. Well, first of all, for the crowd: I run an internal red team, so it's completely different. We have our own team. The dynamic is different. The objective is different. And my... I mean, that's the basis of the question about prioritization, because I ask this question every day, because once we use tradecraft, we need to rebuild new ones, because it's always... Brûlé. Burned. Well, I guess that's the question for you at this point. As an internal red teamer, since you're the same guys doing the same kind of stuff over and over, you have to adapt, right? Because, as I mentioned earlier, I don't know if you attended my talk, but the signature of your compiler and stuff like that: at some point, they may actually realize, oh yeah, we know it's Martin again. He's part of the red team. So as a consultant who just does my thing and leaves, it's slightly different for you, right? So how do you actually adapt to that kind of... They know we're coming. You also have insider knowledge, so... Oh, it's quite a challenge, yes. But today I will say that we have a very, very good collaboration with the blue team, and I'm a big purple fan.
Most of our activities are purple-based, and sometimes we do what we call adversary simulation, just to do a little surprise and have them work at night a few weeks a year. But no, I mean, it needs to be collaborative. And why do you build a red team in a company? Well, to train, to create opportunities, to improve, to challenge assumptions. So it's all blue-driven. So that's how I would answer your question. Fair enough. Let's talk about AI. Do you believe that EDR and anti-virus solutions will leverage AI to the point that the current state of obfuscation and evasion will not be enough to bypass them? So the people who are pushing AI are the same people who pushed the metaverse last year, NFTs the years before. So is AI a thing? Well, yeah, it's cool. You can make pretty cat pictures and you can chat and even sometimes get the right answer. So is that something that's gonna be leveraged? Well, yeah, perhaps. But is the metaverse still a thing? Not really. Do we care about NFTs? Ah, not really. So I mean, next year we're gonna talk about AI like this really cool concept that happened once. But don't worry, there was a point where PKI was the solution, in 2000. Then 2002 was the year of the WAF and all these things. Next-gen AV was 2006. Every year we come up with a new concept to buy stuff. So it's gonna be great: lots of people are gonna sell lots of cool products and they're gonna make lots of money. Many industries are gonna have a new box with AI-powered on it. And it's gonna be as useful as the previous one. And yeah, that's it. If I can add to that, I think AI will definitely be an accelerator. If you were at my presentation, it was through a proof of concept. It took me several weeks to understand what was going on. And when ChatGPT came out, I just provided the code and it was perfect. It explained to me what was going on.
So I don't think it will break our payloads immediately, but it can help blue teams investigate faster. I'm a bit on the side on this one. I guess it's gonna help me write phishing campaigns without typos and grammatical errors. I'm not gonna have to reach out to my boss to QA my phishing campaign. But apart from that, I think there's a misconception about AI, about the fact that it has learning capability. It can only learn what it knows about, to a certain extent. So if you want ChatGPT to come up with a new attack vector that was never seen before, good luck, right? It probably has some value. It's just, I guess, I don't know how to use computers, so I don't really know how to log in to ChatGPT, so I'm probably gonna have to write my code myself. And it's all about the data set. Yes. I mean, the internet is dumb. Like, look at it. There's anti-vax stuff. There are lots of pictures of cats. But think about it: OpenAI is based on the internet we all know and use. So the Facebook posts of your grandma, they're in there, and that's what the intelligence is. So think about this next time you're using ChatGPT. Thank you, Lera. And I think, as far as I know, ChatGPT is basically a prompt and a UI on top of OpenAI's models, so I'm still not sure how this was acquired by Microsoft and not OpenAI, but I don't know much about this. I don't know. It's probably written in Ruby or something like that. Or Perl, maybe Perl. Any Perl devs in here? Olivier is not here today, so no. Let's go for Rust. Rust, yeah, speaking of that, for those of you that love the Windows kernel, the latest beta has a part that was written in Rust. That's gonna be really interesting. You're gonna have to relearn everything you know about Windows. That's gonna be interesting from a red team perspective, an evasion perspective. Really looking forward to it. You can already take a peek at it if you want. It's quite nice. There are some drivers already in there written in Rust.
We have only seven minutes left, sorry. So let's, I mean, would you like to share some failed red team stories? My life is a failure? No, no, no, no, don't say that. Yeah, some story, something interesting that you'd like to share. I can go ahead if you want. So my fastest physical intrusion to impact was like five minutes, because from the outside, I tailgated, followed someone upstairs, went into a conference room, plugged into the Ethernet cable from the VoIP phone, looked at what my DNS server was, and the DNS server was vulnerable to MS17-010. Nice. So, five minutes from outside to domain admin. That was pretty fast. It was in 2017. That's actually really fast. Our record is nine minutes. So five is really blazingly fast. One fail I had was, I was calling tech support, pretending to be someone, and the guy answered: that's me. And I'll be honest, I don't know how you can recover from that. I'm like, oh, sorry, wrong number, and you hang up. Honestly, if you ever figure out what to answer to that, let me know. But right, that was mine. Well, maybe a funny one, to a certain extent. So I was working with this client, and we had an accomplice who was told to actually click on the phishing. That was part of the scenario. They knew that this person would get phished, right? And for some reason, the person had all the information, the subject of the email and all of that, and they still reported the email. So I was like, well, that went well. And funny enough, it was reviewed by the security team, and it was actually marked as safe. So I was like, nah, that ended up being pretty decent. That was a funny one. Okay, okay. Something else? I'm just checking my awesome questions. I'm sure you have another one, Lava. Oh, well, sadly, there's plenty of fails.
It's more a pentest than a red team, but we had a client who made a typo in their IP range, and another company had the same mission: they were building widgets, and the other one was called The Widget Factory, something like this. And they had a Grafana endpoint. We pivoted, we hacked their mainframe, dumped all the credit cards, and the client was copied on every step, saying: that's fine, that's great, continue. And at the end, once we sent the log of, like, we have the credit cards and so on, it's like: oh, but that's not us. And then we were like, mm, what should we do? And we even called the other client to apologize: we mistakenly hacked your mainframe from the outside, I apologize, you get a free pentest, congrats, here's the report. And the client was like, no, I don't want to talk to a vendor. But yeah, we went out of our way. I don't want to talk to a vendor. Don't call me again. And as far as I know, that client is still vulnerable to this day, because we've never been able to reach them. Let's end with a philosophical one. What do you wish for the cybersecurity industry? What do you think is their next best move, from your perspective as people simulating adversaries every day? We talked about it earlier: honeytokens, honey traps. I find them very effective. When I do a red team, I become automatically paranoid if I find easy credentials on a system. I just don't want to touch them. So I think it's really strong to use honey stuff. For me, browsers are the new LSASS. Everything is in the cloud. You can have 20 MFA factors, I don't care. I'm just going to steal your cookie. You don't have to touch LSASS. My answer is more boring, but for me, one thing we're seeing more and more is involvement of management in red teaming. So suddenly it's not just get domain admin, but test our processes and address some actual business risk. In the past, we've stolen a building.
If I may say, we found the building's certificate of ownership, sent it to the city of Montreal, and took ownership of the building. That was a pretty big business risk, and not IT related. So I think more and more, and you've said it, Martin, as management gets involved, red teams are much more tailored to the business needs of the client, much more what the business needs. And it's not just about getting domain admin. And this is something I'm really, really happy about. And internal red teaming is something we're seeing more and more, and I think it brings a lot to this. Thank you, Laura. I think we'll, ooh, I was just about to say pass-the-hash is dead, long live red teaming. And pass-the-hash is not dead. No, really not. Not yet. All right, well, thanks guys. It was an amazing, amazing talk this afternoon. Thank you very much. Let's have a last round of applause. We'll take a 15-minute break, and after that there is another talk. Thank you. All right, so the evening is not over. We have another talk by Ashley Manraj. He's the CTO at Pivotel Technologies, where he spearheads technology and development methodology around event-driven, asynchronous Go gRPC microservices on the back end. He didn't make that easy for me. And on the front end, they're developing with Flutter, cross-platform, using the BLoC pattern to interact with their back ends in gRPC and gRPC-Web. You'll hear a lot about gRPC in the next 30 minutes. And I have to say, I've known Ashley since university, so it's a great pleasure to see you on the stage right now. All right, so thanks. I'm representing today Pivotel Technologies with the subject: gRPC with less effort. The less effort comes after having done the transformation to be event-sourced and having done the work to simplify it. But if you haven't, we're going to go through the steps that get us there. So first, I was doing applicative pen testing. I was doing security reviews for banks before.
Now I'm more on the building side, trying to build things more securely. And we're doing a bit of branding, we call it infinite enterprise, but it's basically a way to have pure containers everywhere, a deployment methodology to secure everything for small to medium-sized companies and do digital transformation. And so we try to do secure patterns, develop libraries for infrastructure as code for your cloud infrastructure, for some of your on-premise, but right now it's very focused on GCP. And we try to simplify maintenance of microservices and maintain huge fleets of microservices for our clients, but also try to develop them faster, or transform those stacks faster. So you can see on top, our website is Pivotel Tech and our GitHub is pivotel-tech. Some of the technologies we use, very quickly: because it was mentioned before, we only use Flutter, or mostly Flutter, for the front end on all platforms, and we've been using Flutter web for almost as long as it's been available, almost three years. We try to maintain our own CI because our CI and CD are very custom. So we use mostly Tekton, and some GitHub Actions when you need root-level privileges when you run your CI, Bitrise for mobile, otherwise everything is in Bash for that part. Continuous deployment is Google Pub/Sub in order to deploy into network-restricted environments, using the Argo stack before, and now we have our custom deployer that builds on top of Argo. For the deployment, heavy users of Terraform Cloud and Terraform, users of Vault and the HashiCorp tools, mainly Kubernetes, sometimes Google Cloud Run. And the technology now is mostly Istio for service mesh and ingress gateway, and the Ory stack for authentication. And last but not least, EventStoreDB for event sourcing. If you don't know what event sourcing is, we're gonna talk about it a bit later. And those libraries do the heavy lifting for us in the transformation, because if you don't have libraries, you can't actually go faster.
So the OWASP API Security Top 10 for 2023 is mostly about broken object level authorization, which is an authorization problem; I'm gonna define it a bit later. Broken authentication, meaning knowing who your users are, which differentiates authentication from authorization. Authorization is also the topic at number three, but at the object property level. Unrestricted resource consumption is mostly DoS; it's not gonna be covered by me in this talk. And the fifth subject will be addressed by the infrastructure automation during the talk. So it's one, two, three and five of the top 10. The objective is to show how we secure directly from protobuf and ensure that infrastructure is secure by default, or non-functioning. So we try to have services that are either functioning and secure, or not functioning at all. That's what we try to do, because otherwise, if you have to review your code every time you deliver hundreds of endpoints, you wouldn't deliver fast. So now, kind of bear with me. This is the template for some of our large-scale transformations. As you can see here, there's a white label project. It can be one or more Kubernetes projects, but mostly it's the front end and customization of the endpoints. So it gets into a Kubernetes cluster with an Istio ingress gateway. It goes to an authentication service, here represented by Ory Oathkeeper, for the white label. And then it talks in gRPC-Web, through a translation layer at the Istio ingress, to a proxy, and through that proxy it goes to an internal gateway, and from the internal gateway to a core. Why do we do it like that? It's because we've automated authorization, and because you have automated authorization, it's simpler to have fewer endpoints but more closely maintain and manage them for your clients.
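The idea of securing directly from protobuf can be sketched with custom method options. This is an illustrative reconstruction, not the actual schema from the talk: the option names, field numbers, and message shapes are invented; the real setup templates the Oathkeeper and Keto configuration from annotations of this kind at CD time.

```proto
syntax = "proto3";

import "google/protobuf/descriptor.proto";

// Hypothetical custom options: per-method annotations that templating
// later expands into authentication and authorization config.
extend google.protobuf.MethodOptions {
  string authenticator = 50001; // e.g. "player_jwt" or "platform_jwt"
  string permission    = 50002; // e.g. "player_viewer"
}

service Wallet {
  rpc GetUserWallet(GetUserWalletRequest) returns (GetUserWalletResponse) {
    option (authenticator) = "player_jwt";
    // The requester must be player_viewer on the player_id in the request.
    option (permission) = "player_viewer";
  }
}

message GetUserWalletRequest {
  string request_id = 1;
  string player_id  = 2;
}

message GetUserWalletResponse {
  int64 balance = 1;
}
```

The benefit described in the talk follows from this shape: the security metadata lives next to the RPC definition, so regenerating the protos regenerates the security config.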
Because if you don't have that automated authorization and you leave developers to do it, it's more secure to develop one set of endpoints per white label, and that's what we've seen a lot of our competitors and others do when they don't have built-in authorization. But once you do have a core for which you can control authorization very tightly, and do just the customization needed for some of the white labels, it's more efficient to have many white labels in front and then a gigantic distributed microservice core behind. And the core retakes the same logic as mentioned before, except we're using a gRPC interceptor directly at the microservice level, templated so that it is always there or the service wouldn't be deployed, and it goes to an authorization service that does rule expansion for us, and then finally to an authorization engine behind. The routes: as I'm mentioning in the text here, you're much better off defining completely new routes for your HTTP/2 traffic, whether it's cloud or on-premise. The reason being, there's a huge new attack vector described by James Kettle from PortSwigger, which is HTTP request smuggling via HTTP/2 downgrade. So if you're on-premise, don't share your load balancers between HTTP/1 workloads and HTTP/2 workloads; define completely new endpoints for them, because to my knowledge it becomes a lot harder to manage your different load balancers correctly: you're at heavy risk of either smuggling, or HTTP/2 downgrade, or other more complex problems related to caching. So we recommend to really separate those two types of routes, whether on cloud or on-premise, and not to mix any HTTP/2 or HTTP/3 with HTTP/1. And the last thing, from experience: when you're using those new protocols, get ready to patch everything in your critical path, which means your cloud configuration has to be regularly updated, whether you're using the Google Cloud load balancer or the AWS load balancer or other large cloud providers, in order to do your DDoS protection, your
Kubernetes cluster, your Istio ingress (keep them at least on the stable channel), your Ory Oathkeeper: everything that is in the critical path, including the proxy, has to be regularly updated, because if you have a vulnerability, the chain is so long, it has to be able to be patched quickly. So if you get into those new things, IaC is mandatory, and doing things fast is also really necessary. So now let's get into one simple service. Why do I say that it's simple? Because there's only one endpoint and the endpoint is very simple; it's a simple read. The logic here of why we're using proto, and talking about proto for security, is, as you can see here, not only about speed, because people talk about proto a lot for speed, but about the IDL, and the fact that you can template, directly from the code, CRDs for the deployment of your security, so that your security is automated directly from the proto level. Your protos are compiled and then deployed through your CD into the different configurations that we see here: here for authentication, and under it for authorization, and the first one being for the white label. So when we go back to this schema, you have the Oathkeeper config here for the white label and a second Oathkeeper config here for the core. And you can see here the config for the white label: it's the white label player with the authenticator player JWT, and the endpoint names are GetUserWallet and GetUserWalletResponse. But at the core level, you can see that it transforms into GetPlayerWalletResponse and GetPlayerRequest. So on that proxy you can transform the calls and modify them a bit without the white label knowing that they're actually using another interface. That's one of the big advantages as well of having two different sets of configuration and security: you basically have two levels of control. And, as well, the aspect that we really want to talk about here: it's a simple wallet with a request ID and a player ID. And authorization, if I
haven't mentioned it before, is knowing what the person should have access to, versus who it is: the requester being the person asking for the asset, and the player ID being who it is requested for. So knowing whether a requester is allowed to request for that player ID is what we solve with authorization. So our custom option here that you can see is made for authentication purposes, to validate the JWT. Here it's the platform one for the core, and the other one that we saw previously is for the white label. And the authorization is, for now, evaluated for us on the core. It could also be evaluated on the white label end, but the validations didn't completely make sense because we were doing exactly the same checks twice, so we ended up doing more on the core and adding another level of platform check on the core. And so you can see here our custom structure with a player and the player ID, using a permission: the requester ID (sorry, I inverted it) is a player viewer on the proto field player ID. And so from this authorization check, we make sure, directly by reading our RPC, that a validation of player viewer is needed in order to access that wallet on the core. To summarize, the big objective here is to delegate a lot of the security to be templated directly from the protos, using custom annotations to drive each layer of interfaces: as soon as we patch or upgrade any interface, the security is deduced automatically at the different levels of deployment, and you don't have to maintain as much once it is built. The problem, again, being that you have to build it, or find someone who helps you get through that transition of supporting the templates at the different levels. And so we're gonna get now into the templated value that is generated. I won't bore you guys too much, but for the white label it has, again, the required scope, the audiences, the issuers, but
more interestingly, the URL is automatically templated and generated directly from the proto. So as a user or a developer, you generate your proto and all the security is deployed for you. But your authorization is not: your authorization requires a few other things that will be added by infrastructure later on in order to make it available. That's for the white label, and we'll get rapidly into the core: as you can see, it's very similar, it's also templated, the infrastructure generates the URLs for it. We're using Hydra from Ory in order to manage it, and we're using the different scopes, and again, the developer doesn't have to type any of that; most of it is generated directly from the options here, plus deployment manifests, in order to enrich the authenticator names, here platform JWT and here player JWT. So the template knows to look for those two fields in order to enrich it for automatic deployment. So how do we enforce, at the final microservice, that the interceptor will always be there? Otherwise, what's the point of doing all this if it isn't automatic? We're using templating and code generation in order to enforce that all developers are following our code base and our methodology. It goes from, here, the core interceptor, to databases, to external services, to mounted services, to shared secrets, in order to patch hundreds or thousands of microservices directly by patching a codegen. So we can see here that the unary server interceptor is present. We have a value to remove the authorization if need be; there's a bypass that is in the code, but it's not available in config, just available to developers in their local Kubernetes for the authorization while they're working, because if they don't have the entire flow, they can't work if we don't add the authorization engines that come later on
(I'm going to talk in a moment about how we actually add those authorizations). So that's just to show that you can have enforcement at the infrastructure level in the microservices, by tightly controlling the code of those microservices before they get to the authorization. And here we have the config that goes along with the service that we saw earlier, with the player and player viewer. This is a syntax from Google, from the white paper published in 2019 called Zanzibar. The spec was described in the white paper, but the code is not open source, so we implemented the entire spec in 2020. As you can see here, the description, when you read it, is basically made to expand upon the list and expand the usersets in order to understand what's happening. So I'm gonna try to go slowly with you guys so you understand, but the objective at the end of the day is to have something that's a bit easier to read for the infrastructure specialists, because as a developer you only see the tags that are here: player viewer, player editor, or player manager, and in between is the deduction of rules. So we're gonna go through the simple one. The player viewer is either directly a player viewer, or is somebody computed from player editor: a player editor is also a player viewer, so if you give that property to someone, they automatically get access as a player viewer. And a player editor is either directly a player editor, through the use of the keyword "this" (this means directly the term player editor), or from the computed userset from player owner. So if you have the property player owner, you're automatically a player editor, and automatically a player viewer. And so if you start to build more complex engines, let's say with three front ends and hundreds of rules
(for us it's on the order of fifty, let's say fifty or seventy keywords on larger platforms), it's way easier to read those terms, understand what those relations mean, and test them, rather than try to build or understand the larger authorization at play. I've just put one example of a more interesting use case: a player manager is related to a tuple set in the namespace master, with the relationship player. So it's a master-player relationship that makes someone a player manager: basically, a master that has the relationship player to that player is that player's manager. And the computed usersets also apply to master owner, so the master's owner will also be the player's manager. And the big advantage is for that infrastructure developer, because we don't want every developer to be doing this; we want only two or three of your developers doing this, working with tuples, but all the other developers benefit from it afterwards. And so here you have your developer specialist who understands the tuples and defines the rules to be inserted: player grad has player owner grad; the master of the white label has a relationship player with player grad; and the platform, with the term white label, has the term active on player grad. Those are more complex rules that were added later, and this is just a subset. And you can check, directly at compilation time of the authorization library, that grad is a player viewer of himself, and
you can assert that other things are false as well, in order to make sure that your validation makes sense. That's kind of a big advantage of the methodology: you have some people specializing in authorization, they manage it for everyone else, and you don't have to manually re-verify a lot of things; it's handled by the infrastructure for the developers. And that's an example of tuple testing, dedicated to those that do this. And here we have an example of the entire flow, with the trace that's happening from the large-scale infrastructure that we showed before, down to the secondary cluster. The first query comes in and takes 40 milliseconds. It spends a bit of time in Oathkeeper, the first Oathkeeper that we showed earlier; the evaluation at the Oathkeeper level is around three milliseconds. It gets to the GetUserWallet, which was the first call that we saw earlier, at the proxy level, and from the proxy it gets to the secondary GetPlayer call: if you remember, earlier we showed that there's a discrepancy, a change, at that layer of the proxy. And from the proxy it goes to the internal load balancer and the secondary cluster. So we get again into a cluster, and then there's the secondary Oathkeeper, a bit faster at evaluation, and then it gets to the final GetPlayerWallet within the internal microservice. That's the first part of the trace. The second part of the trace: once we get to the GetPlayerWallet, it's down to 23 milliseconds; it performs the check for GetPlayerWallet, an internal call to Keto. But as you can see, there are two checks. The reason for having two checks is that
Keto checks that the player is indeed a viewer and that there's a direct relationship matching the one that is in Keto — is a player viewer related to player owner — and the second check says yes, so the chain is valid. The get-balance is then performed, the user gets his answer back, and the response propagates all the way back through the entire chain to the front end. One part I didn't talk about is how you make sure that your authorization rules are added correctly by your infrastructure, in line with what your developers are doing, so now let's talk rapidly about that part. We're using EventStoreDB, and with EventStoreDB we're monitoring three streams here as an example: player-created, player-activated, player-suspended. We're using event sourcing, so developers feed those streams, and this is a projection in EventStoreDB that automatically links them to another projection, or another stream, called authorization. The authorization stream listens to hundreds of internal streams in order to feed the authorization. As we show here, the platform users or the developers emit the different commands — commands such as player-created, player-activated, player-suspended — which go into EventStoreDB in our case, but you could use Kafka, you could use NATS JetStream, most event buses. The infrastructure listens to all the streams relevant to authorization, and from that we insert the tuples into the authorization engine, and from there they're inserted into AlloyDB. For us that part is just for scalability; we're using
large-scale databases in order to have high performance there, because those checks are performed once on every single call, so it has to be performant. And that's what I was talking about in terms of event sourcing: all our applicative services use the streams, and all the events from the applicative side are pushed into event sourcing. You could technically do it without — only push your authorization as an event-sourced part and keep all the rest of the infrastructure normal — but we made the effort of doing event sourcing on everything, which means that once new events are propagated, we're eventually consistent. The authorization event handler pushes the rules for those different situations into the database for Keto, and then the authorization library is capable of doing the rule expansion and checking against Keto to make sure everything is secure. Those are the advantages once you adopt that methodology — and by the way, we're not the only ones doing this; a lot of the big organizations also work with large-scale authorization engines. It's the practical way to solve the big issues related to authorization. The examples we've just shown are simple ones; it can get a lot more complex, and honestly, after doing this for a while, we wonder how developers manage complex authorization when there are seven types of user groups, five types of privileges, and delegations in between — it's very tough to manage with tuples. I wonder how developers were doing it correctly, if they could do it correctly, back in the day without event-sourcing engines. And this is just a small example to give people visibility on a stream such as player-created.
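The event-to-tuple projection described above can be sketched as follows. The event shapes and the derived tuples are assumptions made up for illustration; a real deployment would consume EventStoreDB or Kafka streams and write the tuples into the Keto-backed store:

```python
# Hypothetical event handler: fold domain events from streams such as
# player-created into the relation tuples that feed the authorization
# engine, as in the projection described above.

def tuples_for_event(event):
    """Map one domain event to the relation tuples it implies."""
    kind, data = event["type"], event["data"]
    if kind == "player-created":
        # the creating master becomes the new player's owner
        return [(f"player:{data['player']}", "owner",
                 f"master:{data['master']}")]
    if kind == "player-activated":
        # mark the player object itself as active
        return [(f"player:{data['player']}", "active",
                 f"player:{data['player']}")]
    return []  # events irrelevant to authorization are skipped

def project(events):
    """Fold a stream of events into the current tuple set."""
    tuples = set()
    for ev in events:
        tuples.update(tuples_for_event(ev))
    return tuples
```

Because the handler consumes the same append-only streams the services emit, the authorization store stays eventually consistent with the domain without developers writing tuples by hand.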
The player-created stream here has all the events associated with it — for those who have never seen an event store, it's basically a stream that contains all the events related to that context. You can do that for any type you want: player plus any secondary word qualifying what is happening. Usually it's for new objects; if somebody is suspended or activated, that's related to activation, to whether an account is active — that's why you have those other terms. But you can think of dozens or hundreds of use cases: do we have permission to, say, modify the cart, act on the account, open another account, chat with somebody? You can do anything with the authorization. That's what Google is doing behind the scenes for Cloud Storage and GCP: they're using huge Zanzibar engines that do billions of queries per second on all the assets to validate systematically. We're doing kind of the same thing, but with event sourcing, at a smaller grade; the objective is to be secure by default and by design. With other methods they use tons of other services to do that; we're doing it simpler, at a smaller scale, but because we want security and we want speed, we ended up with the same methodology. So what are the key takeaways? It's not easy to implement the entire structure — a lot of banks haven't done it. When you look at who has implemented it, there are a lot of groups experimenting; I think only a few dozen have completely implemented the methodology in prod, with authorization at scale like this. I believe it's the best way to simplify not only authentication — a lot of you are probably using authentication services, which is becoming fairly standard — the authorization requires
well, in our opinion, a lot better annotations by default. As you saw with the rule expansion we showed before, the annotations are heavier; you need the structure of protobufs to directly compile and obtain a functional service that implements it. We think event sourcing is not mandatory, but it makes this implementable: if you don't have access to a lot of streams coming from your developers out of their usual path, and you're trying to listen to a database instead, it becomes a lot more complex and you can miss things. Whereas if you're using an append-only event store, you get the advantage of seeing all the events and even correlating them — say activate, then deactivate, then reactivate, then deactivate — and being able to follow the flow exactly. If you have event sourcing and an event handler watching both streams at the same time, it can use the logic and the timestamps to figure it out, whereas if you do it from a database, discrepancies between the operations might throw it completely into disarray. So we think event sourcing really helps to do authorization in that fashion. The other big thing is that your CD needs to be very powerful in order to manage large fleets of microservices like that, because as we saw before, we have something like seven types of microservices in our infra just to generate one or two business services — so you need the capability to be very, very efficient in your continuous deployment. We've put almost one hundred percent of our infrastructure effort on that part; that's how we deploy dozens to hundreds of services now. And finally, if there are people interested in going in a similar direction — so our effort doesn't go to waste — we can help, particularly if you want to customize your Ory stack. We have it implemented for everything, from validating tokens to
authorization and OpenID engines, with Hydra, down to Keto as the authorization engine. We use the entire stack and we heavily collaborate with the Ory group for that part. And again, pivotal.tech for our libraries. Hope you enjoyed the talk — we actually have a bit of time for questions, so if you have any, raise your hand or put them on Slido and I'll read them; I'll just check the website. Any questions? Don't be shy. Thanks. — Testing, one, two — that works. All right, welcome to the last scheduled talk of the day. Nicolas Grégoire has been auditing web apps for twenty years. He has been an official Burp Suite Pro trainer since 2015 — there's a training next week — and he has trained thousands of people, privately and publicly. Other than that, he runs Agarri, a one-man business where he looks for security vulnerabilities, for clients and for fun. His public talks cover many subjects — SSRF, XSLT, Burp Suite — and have been presented at numerous conferences around the world. We're very happy to have you, Nicolas, so take it away. — Hi, it's a pleasure to be here once again; I think this is my third edition. I did a similar talk ten years ago with plenty of tips, so luckily we have different content this year. Quickly introducing myself: I'm French — you probably noticed the accent already. I own a one-guy company, and as said during the introduction, I'm an official Burp Suite training partner, mostly for Europe — but given that I love Canada, customers in Canada would be welcome. I train nearly one hundred people a year, so that's a lot. Anyway, you are here for the tips, so what's the plan? First, I have a few tips for the core tools — the tools which are there
by default. Then I will discuss a few extensions, then a few other subjects depending on the time left — and after that we have beers, the CTF, and enjoying Montreal itself. So, regarding Proxy history: my point is to be as lazy as possible. If I have to scroll through the results, that's probably useless and I want to avoid it. If we look at Burp Suite, the default sorting order in Proxy history is oldest on top, so you are constantly scrolling and scrolling to the newest content. The solution is super easy: you can simply double-click here and have the newest entries on top, which is quite comfortable. It also works for Logger — you can apply the same sorting — and Logger++ too. In short, if you are scrolling to see the new results, you probably have to change the sorting criteria. Okay, second problem: in Proxy history you want to map a specific action — clicking a button in the mobile app, following a link — to a specific set of resources. What I do, for example: in Proxy history I tag the topmost entry, then I do something like accessing a website, a complex one, and here I can easily map it — everything above the gray line is related to my latest action. A variant of this strategy: I intercept the traffic, access a specific host, switch back to Proxy, and here I can colorize directly from this menu — the same outcome with a different workflow — and when I switch back to Proxy history I can see the very first request is green. Okay, that was some beginner stuff; let's discuss Repeater. In Repeater, on the left we have the request, on the right the response, and quite often we are interested in a specific location of the response. I have an example for that.
Okay — scroll to match. This is WebGoat, a vulnerable web app. We can see here that this parameter is used in a command, and the command's output ends up somewhere in the response. The least efficient strategy would be to scroll to the location. A little better is to search: you can see at the bottom that I can jump directly to this entry, but every time I send a new request I have to scroll again. Behind the cog you have this setting, and then the response will be scrolled every time there is new content. So if I submit my request, I go directly to the location I'm interested in — no scrolling, no time spent looking for the interesting piece of data. What else? For a while now we've been able to create groups and put Repeater tabs inside them — very useful. A related, little-known feature is the hotkey for searching tabs: I use Ctrl+Shift+S and we get a list of all tabs, which we can navigate with the up and down keys, and if I type some text like "paper", I see directly, and only, the relevant entries. When you have fifty or eighty tabs, that's very, very useful — of course you need to give them proper names, but you probably have to do that anyway. Okay, let's discuss Intruder. In Burp Suite Pro — not in the Community version — there are some interesting features in Intruder. When you are using a simple list, as I do here, you have a drop-down menu at the top with plenty of wordlists, which is useful by itself, and it's possible to customize the wordlists: from the menu bar I go to
Configure predefined payload lists, and here I can use the built-in lists (the default) or point to a specific directory — that's what I do here. Now I go back to the same menu and I have only my own wordlists; if I want mine plus the built-in ones, I go back and click on Copy — or Export, or Dump — which extracts the built-in wordlists directly to my hard drive. It takes a few seconds, and now I have the built-in wordlists on top and my own at the very end. What else? So this feature — customizable wordlists plus built-in lists — is very nice. Now let's say something negative: there are placeholders in the wordlists. Let me show you. If I use Fuzzing full, you can see {base} here, and if we go below we can see "your email here" or "your server name here". If you want to fully use a wordlist, you of course need to replace these values with real ones, and there are a few payload-processing options relevant to this: on the screenshot, the first one replaces {base} with the base value stored in Positions; domain uses a unique Collaborator hostname; and all the other placeholders you need to customize manually. So here I will replace {file} with /etc/passwd and "your email" with my own email address. It's currently a mess, as you can see, and as listed on the previous slide the syntax is inconsistent — curly brackets versus angle brackets, file versus known file — so they have to clean that up. In all cases we have to manually replace the values, and I think plenty of bugs were missed just because users were scanning for ../../{file} between curly brackets and never finding the real file, of course. Okay, something about Collaborator. Collaborator is a way to get notifications from the web app, and a very common assumption is that to get a Collaborator pingback you must use the Collaborator domain name — we will simplify and consider only the public Collaborator server. So, is this assumption really true? You can imagine that if the answer were yes, I would not have this content in my slides. The answer is no — or rather, it depends. For DNS pingbacks, you must use the Collaborator domain name; for HTTP interactions, you can use any domain name as long as it points to the correct IP. I could do it live — I mean, I did it live five minutes ago. I take a Collaborator hostname — that's the public server, and that's my Collaborator ID, or hostname — and it points to several IP addresses, but we'll use this one, 144.77, etc. Then we use another domain name resolving to the same IP: nip.io is a free service; this is the IP address encoded, and this is just a random string — not so random, of course. So we have a domain name pointing to the Collaborator IP address, and my hostname is rsn, etc. It works surprisingly well: here I simply access Collaborator via my own domain name and put my Collaborator hostname directly in the path — it could be in a parameter name, it could be in a parameter value — and as you can see, the interaction is correctly linked to my own instance of Burp Suite. We can see rsn here, my Collaborator ID. And there are plenty of ways to do this —
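The trick just described is easy to script. A tiny helper (the label and IP below are placeholders) builds a nip.io name pointing at a given Collaborator IP, so the Collaborator domain itself never appears on the wire:

```python
def nipio_host(ip: str, label: str = "rsn") -> str:
    """Build a nip.io hostname that resolves to `ip`.

    nip.io answers <label>-<a>-<b>-<c>-<d>.nip.io (it also supports a
    dotted form) with the embedded IPv4 address, so a WAF matching on
    the Collaborator domain name sees only this hostname instead.
    """
    return f"{label}-{ip.replace('.', '-')}.nip.io"
```

The Collaborator ID then goes into the path, a parameter name, a parameter value, or the User-Agent, while the Host points at the nip.io name.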
here's another one. Oops, my bad — I use my Collaborator ID as the User-Agent, and once again the traffic is correctly correlated to my own interactions. And if the web application firewall is looking for the Collaborator domain name here, there's no way to find it — so we have a clean bypass in most situations. Okay, let's discuss extensions. Hackvertor is a Swiss Army knife: you can do whatever you want. It uses XML-like tags, and you can chain them to apply several transformations on the fly. Here's a basic example: we have a string, we compress it, and the resulting binary data is base64-encoded, and we get something like that — a minimalist example. We can generate fake data, which is very useful when you have to generate unique values: if you are creating users, or files via an API, you probably have to provide unique file names or user names, and you can use the fake hacker, fake book, fake company tags to generate valid but random data. In this example I just ask for sentences, and a few sentences are generated on the fly. We can go to more complex setups: here I encode my email address and put it in a variable called email, and the flag here means the variable is global, so we can reuse it anywhere in Burp — in a different tab, in a different tool. Elsewhere, possibly in Intruder, I fetch my variable, maybe iterate on uid, and everything sits inside a jwt tag, so Hackvertor will generate tokens on the fly. As soon as you've managed to leak the secret key, you can generate tokens on the fly simply by using Intruder plus Hackvertor. If anybody here does HTTP smuggling, you probably know that
managing sizes is a mess, because we usually have two sizes and we need to manage them dynamically. It's a complex Hackvertor setup, but it does exactly that: we have some text here in the middle; on the line above we get the hexadecimal size of the chunk; and here in Content-Length we have the size of the size — the length of the hex size string plus two, for the trailing CRLF — computed with the arithmetic tag. I think I have a demonstration — not sure if you can read anything; can you read something? Yeah, okay. In Hackvertor we can't set the font size, so that's a problem, but as you can see, I have a short string: the size is 8 here and the Content-Length is 3. If I add some characters, we can see the size is now 26 — one extra character in the size string — and the Content-Length is now 4 instead of 3. That looks like nothing, but pay attention to this part: maybe you're exploiting a complex bug and you don't want to spend any brain power managing the sizes manually. We can go very far — I'll stop giving examples — but here's one specific application where you need to sign the body of the request with the CSRF token, and that can be done on the fly with Hackvertor. And there's much more: we can execute Python code, we can execute system commands like cat, whatever; we can access the execution context, so we can get the URL or the value of a specific parameter. The more time you spend with Hackvertor, the more you like it — it's really good. There's one big disadvantage: using tags will break Burp's syntax parsing, which has a few side effects, but we don't really care; it's not something that will stop us from using the extension. Okay, Piper — I need to go quite fast. The idea behind Piper is interesting: you can execute anything running on your workstation or laptop directly in Burp, so it could be an interpreter like Python, it could be any
command-line or GUI application you have locally. I have a few examples; let's switch to Burp. On the right I have the response, a big blob of JSON data, and I want to make sense of it. In Piper I enable the gron entry, and the configuration is very basic: if the body starts with a square or curly bracket, I pass the response body to gron, and the result appears directly in Burp. That's all we need: I have a new tab here labeled gron, and if I click it I see the response body processed by gron. There's nothing else to do — just define a filter and define which command should be executed. If you prefer jq, we could have exactly the same config for jq. That was gron; now Okular. Okular is a PDF reader — but it doesn't matter, any PDF reader will work — and we have a similar configuration: if the response starts with the PDF magic, we enable a PDF reader in the contextual menu. So here I go to Extensions, Piper, and given that we have the PDF magic, I can directly open the response in a file viewer. If you process a lot of complex files, it's very efficient compared to exporting to a file, removing the headers, changing the extension, and so on. Last demonstration for Piper: comparing entries. I'm not very happy with the built-in Comparer, so I take three requests — the three yellow ones here — and right-click; the menu appears only if we have two or three entries, because Meld can compare two or three files. I compare the requests, and the traffic is saved to disk — you can see the temporary file names here — and the Meld command line is generated on the fly, then executed. Very convenient. Okay, Burp Bounty: it's an extension used to write your own scanning checks, but I think it
will die soon, because there's a core feature called BChecks that should be released in a few weeks. You get a scripting language: you define your payload — your attack — then you define how to identify a vulnerability, and you have some meta information here. I hope the community will share this kind of recipe publicly; there's a link to a video describing the feature in the notes. What else? I need to go very fast. Keyboard shortcuts: there are plenty of them, and if you want to be very efficient you need to use combinations. Switching from Proxy history to Repeater is three actions: sending to Repeater is Ctrl+R, switching to the Repeater tab is Ctrl+Shift+R, and sending the Repeater request is Ctrl+Space. If I do it — I pick an entry and use Ctrl+R, Ctrl+Shift+R, Ctrl+Space — it takes one or two seconds to go from Proxy history to Repeater, and muscle memory takes over, so you won't think about the three hotkeys; it becomes one compound hotkey bringing you directly to Repeater. Poor man's automation: that's when you are looking for bugs, say in bug bounty, but you are on holidays, or you are at NorthSec, and you want to look for vulnerabilities anyway. We need two ingredients: a live task in Burp and a very specific configuration. The live task monitors Proxy history, and every item appearing there will be scanned — in most situations that's very dangerous. We combine that with ffuf, using a specific option, -replay-proxy, so all the interesting entries are relayed through a proxy. So we combine ffuf here, matching on a status code 200, and every finding is forwarded to Burp,
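A minimal version of that pipeline might look like the following — the target URL, wordlist path, and proxy address are placeholders; `-mc` matches on status code and `-replay-proxy` re-sends only the matching requests through the proxy where the live task is listening:

```sh
# Fuzz the target; replay every HTTP 200 hit through the local Burp
# listener, where a live task triggers an active scan on each arrival.
ffuf -u https://target.example/FUZZ \
     -w wordlist.txt \
     -mc 200 \
     -replay-proxy http://127.0.0.1:8080
```

Only the interesting hits reach Burp, so the live task never wastes scan time on the noise ffuf filtered out.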
and in Burp we have this configuration: a live task monitoring the proxy, and whatever the scope — whatever the hostname — it triggers an active scan. For sure that's not very advanced — it's not a real pentest — but if you are on holidays, it's better than nothing. You could even run ffuf on a VPS and forward to Burp through an SSH tunnel, so you have Burp locally and ffuf on a remote server talking to each other — quite elegant. Okay, not sure how much time I have left. Something about performance: I very often get feedback about Burp being resource-intensive. My opinion is very clear — computers are cheaper than brains; you can send this slide to your boss or manager. You want a computer that is larger than necessary, and whatever you are doing — running Burp, running VMs — should not be something slowing you down. It's already difficult to find vulnerabilities, so having a decent computer is a good idea. How to stay up to date — these are just a few tricks, of course: PortSwigger has a YouTube channel with very short videos and very long ones, and both are very good; they have a bunch of Twitter accounts, plus those of the employees themselves. I have a Twitter account dedicated to Burp — I have my real account, and this one is exclusively for Burp tips, links, and so on. I'm nearly on time, so if you want the slides, they are already online; you can just get them. What else? If you liked what you saw, I will release a workshop for free next month: there's a free online conference called NahamCon, and I will have a 60- or 90-minute workshop — I'm not sure which — related to session management: cookies, JSON tokens, and the like. So that's a bit later, and I think I'm nearly on time.
Thanks for listening. If you have any questions, they are welcome if we have the time, and I will be at the CTF Friday evening — with no goal to score or anything — so if you have any questions about the challenges, or how we could automate them, that would be a fun subject. Thanks for listening. — We probably have time for one question. Anyone wants to be the one? Yep. So if you want to take a picture of — oh yeah, thank you — if you want to take a picture of the links: sorry, it's already there. Is that the only question? That's a good question. Okay then, thanks Nicolas — very good, I learned a lot. Now I'll welcome Nestori Syynimaa from Secureworks. Secureworks is one of our sponsors, and he just wanted to take a little bit of your time to tell us about Secureworks. Here we go. — Oh yes, thanks. Okay, can you hear me now? Yes, you can. So, last talk of the day, a very short one — only five minutes. My name is Dr. Nestori Syynimaa, nice to meet you; I'm also known as DrAzureAD, and I flew all the way from Finland yesterday to give you a workshop tomorrow morning at 10 o'clock about Azure AD and tokens. Why am I here? I don't know, to be honest, but it turns out we have this kind of sponsor talk and I was here anyway, so my colleagues from Montreal asked if I could talk about something. So I'm going to talk to you not about Secureworks but about something else: how researchers like me — I work as a full-time researcher — can actually defend every corner of cyberspace. Let's start with this Pyramid of Pain. I think some of you have seen it before: it tries to describe the cost of change to adversaries, or threat actors, if we change something. For instance, there might be a threat actor who has a piece of malware, and you know the hash of that malware — so you detect it, you have the hash value here,
and then you block it. It's quite easy for the threat actor to change that malware a bit so that it will have a different hash, right? So where should we focus instead of this easy-to-change stuff? Something trickier — and yes, we're going to focus on the top of the pyramid: TTPs, tactics, techniques and procedures. Those are actually the hardest things for threat actors to change — even if you change everything below, those are still going to be valid. So we should focus on those, but they are not easy, right? They are actually quite complicated. So what do you do? Well, you call the researchers, and we come to help you out. So — who of you attended the keynote today? Okay, that keynote was given by my friend Roberto, and now I'm going to show you a couple of slides from a Black Hat Europe talk that Roberto and I gave last December. The talk was about how to prevent Golden SAML attacks — that was pretty much the whole idea. Here's how we started: the whole Golden SAML attack is based on one technique, MITRE ATT&CK T1649, Steal or Forge Authentication Certificates. This is the process: you start the attack, you export the token-signing certificate, and that's the end of the attack. As a technique, that sounds easy to block, right? But no, it's not that easy, because adversaries can use a lot of different procedures to reach the same end goal, which is to export the token-signing certificate — and here is the whole attack graph, as it turned out. I've been researching this for many years; I'm good at, you know, attacking Microsoft stuff, and Roberto is very good at detecting stuff, and together we are quite good at mitigating and preventing stuff. So actually, in our Black Hat talk we were able to block all known
attack vectors, so now we are safe — and you are welcome, by the way. That's what we did, and I'm actually getting close to the end of my talk, because this is very short — a couple of minutes — and I'm between you and the party, so I'm going to be fast. The key message is that things are complicated and getting more complicated, so you need researchers, and when they collaborate — like me from Secureworks and Roberto from Microsoft — the magic happens. You can actually do this kind of thing: I found stuff, and Roberto said, "I can detect that," and I was like, "damn — how about this one?" "Got you." When we do things together, we can get it like this. Okay, before letting you go: I flew from Finland, and I did a little bit of research and learned that you Canadian people are also mad about hockey. Well, so are we. I know you guys know what the Finnish national team jersey looks like, so I wanted to make it easier for you to find me: I'm going to wear this tonight, so if you want to talk about this research, or my talk tomorrow, just find this guy. Okay, thank you, have a good night. — Thank you, Nestori, and thank you, Secureworks. He's also going to give a workshop tomorrow, so if you want to hear more from him, be there — tomorrow morning, I think, right? Good. So we're going to take a short break, and we'll be back with a special panel about the history of NorthSec, with the founders — there are going to be a lot of fun stories. A reminder: there's going to be food, so everyone will be fed; there's a party after, there are drinks, and there's going to be a band on the stage I'm standing on right now, at eight p.m., so please come, it'll be a lot of fun. All right, now that Olivier is hidden behind the stage — this is special; this is for the 10th-year anniversary of NorthSec. Olivier Bilodeau, whom most of
you probably know uh he has uh given uh a qna on red teaming last year without even being the moderator so uh you'll be able to entertain you for for one hour uh please give up for the president of norsek elvie bilado thanks hey hey can we put the the photos all right okay so i have the honor of doing this 10 year anniversary but it is also a burden so this is an old scoreboard scoreboard i guess uh it is also a burden uh because i'm here to host something that i didn't create okay and something that many people contributed to many people spearheaded and i know now how much work this represents um so we will uh basically feature today in a talk show style laid back the guests uh not the guests the creators of norsek and we'll have them you know one by one and add to the to the party and we will go through uh three eras of norsek okay by presidencies um and uh yeah i would like to to showcase their work but we will also take questions from the audience uh if if you want um we are we will be allowed to uh speak some french so bear with us uh we will try to keep it english uh you know for accessibility reasons because because we're an international event we have people flying over all the place uh to come here and we would like to have more and more of that if we want to be the biggest ctf in the world uh so uh so we will try to keep as much english as possible but i want my my guests to know that if something's funnier in french feel free to say so all right i will uh invite up on the stage right now amandine uh gagnon et bah who is our vp engagement uh amandine are you here simon you need to find amandine we uh we were uh they were briefed they know they should be here but apparently it didn't work out so we'll skip her and have emil first and then we'll we'll switch back to amandine so is emil in the house yeah sit right here man so emil is our uh vice president of the competition which is our weird way of saying ctf because ctf translates badly in french um so 
Émile, how long have you been involved with NorthSec, and what was your path here?

I think the first time I participated in NorthSec was somewhere in 2014, as a participant. I was really bad, very, very bad. After about one day of full CTF, trying to find any flags, I think I found exactly one flag, in the rules. And then I decided to get drunk.

You found the rules flag? And this is the path to vice-president? Interesting. All right, do you have any specific anecdote you'd like to share about the CTF? What's the moment that comes to mind for you?

I think my best moment at NorthSec, and it's a tribute to Simon, who was VP of the CTF before me, is that I remember coming in, just seeing the room and everything happening, and this guy, Simon, comes up to my table like, "Hey, do you want some bread?" And I'm like, "A what?" It was a very fun experience to see that NorthSec was not only a CTF where you stay behind your computer and go find the crypto 500 and whatever; it was more physical, there were things happening. It's a full immersive experience.

It was a CTF, but it was also a bakery, is what you're saying.

Yes. So for those who don't know, we made fresh bread the year the theme was the Madison Bakery, which is a play on the Ashley Madison leak. (That one's for Saturday, man, keep it clean.) Interesting, because the bread smell filled the venue that whole year. It was Pierre-David's sourdough, and they were making it in large bins; it was a bit of a crazy operation. I have one last question for you, Émile, and this is serious: what are your thoughts on elitism versus accessible challenges? How do you see the balance?

It's a good question. "Elitism" is not a term I like a lot. It's nice to have hard challenges; I like to see people spending a lot of time on difficult challenges, getting a good nosebleed on a reverse and everything. That's super important, it's part of NorthSec's DNA to have very difficult challenges. But accessibility is the path that makes people like me, who suck at CTFs, get interested, get involved, and become part of the community. So we do try to have more and more challenges that are accessible, that are more beginner-friendly. Overall, we wanted the CTF to evolve so that if you're really here for the elite competition and want to be super serious, you can be, but we also want you to be able to say you're here for fun, and to be able to give you more hints, or coach you a little more about CTFs, without you really enduring the real competition that's happening behind it.

Interesting, insightful, thank you. Émile, I will ask you to step aside, and we'll have Amandine Gagnon-Hébert take your seat. No, no, you're going to move from one seat to the other, like in a talk show, man. Have you ever watched TV? That's how they rotate in during the ads.

Hey, Amandine, thank you for being here.

Hey. I'm late because people were running after me, like, "You're wanted on stage!" But I'm too busy to be here, so it's okay, I'll take fifteen minutes of your time.

Five minutes, talking. So, Amandine, you are our Vice-President of Engagement and Outreach.

Am I? Yes.

Okay. So the question is: how does one go from a PhD in psychology to becoming a full-time, OSCP-certified pentester, all while maintaining a clinical practice?

I don't know, man. I don't even know if I sleep. No, I sleep very well.
I think the secret to this is just being a good time manager, you know, and prioritizing the things you want to do. For example, NorthSec: this year, and the year before, I didn't have time to be here, but I'm here anyway. I missed some shifts at the DPJ to be here, so the kids are not my priority today. So I think setting priorities is my best trick. I think I have ADHD too, so I'm always doing something, and I'm passionate about what I do.

But you've been with us for two years, which means that the ADHD...

ADHD? You don't know psychology, man.

I don't ADHD? Yes, sorry. So you're still with us after two years, which means this community is doing something right for you, is it?

No, absolutely, I love being part of the community. I have an extensive background in being involved in what I do. I like that we can speak in French a little bit, a little bit, but bilingual, good. I like the openness of the infosec community, the fact that I meet people from various horizons, not people who have all been through the same courses, compared to the academic environment, and that's what I appreciate the most. I never participated as a participant before becoming a responsible, so last year, maybe it didn't seem like it, but I absolutely did not know what I was doing. I was telling people what to do while trusting them, but maybe I was messing up.

And there's a photo of you, very confident, in the slide deck at the end.

Absolutely. So it was an extraordinary discovery, I meet people. I'm social about fourteen days a year, and it's during NorthSec; after that I go back to my land. But it's the moment where I really give a lot of myself to meet people and connect with our friends.

Yes, thank you, beautiful answer. So now my last question: how do we make other little Amandines? What do you recommend to young people who want to get involved? Which path should they take to become an Amandine?

You need a DEP in boulangerie.

Why am I not surprised. It's the most fitting parcours, yeah.

How to do it? I don't know. It's going to be cheesy, what I'm going to say, but I don't exactly recommend it, you know, because when I talk about my workload, people are like, "Oh my god, you do so many things." But for me it's a life equilibrium to be involved in both at the same time, because I switch domains all the time and I need to change my mind. When I'm here, I don't think about my kids from the DPJ, I don't think about clinical psychology, and the opposite is also true: when I'm in psychology, I don't think about becoming a domain admin. My priorities are here. For some people it's resting, gardening, playing chess, biking; for some it's playing video games, all those things. But for me, it's two careers that I love, that I run in parallel, and that's my way of finding my life balance.

That's good, that's good.

How to make an Amandine? I don't know, you'd have to ask my biological parents, I have no idea.

Well, in fact, I hope your story will inspire others and they will find their own DEP, their own way, but I think it's an inspiration for the future generation, and it's important.

Honestly, I recommend it to everyone. I'm going to put in a plug for NorthSec: apply to NorthSec, please, but apply elsewhere in the community too, whoever you are, wherever you come from, take part in the initiatives, because it pays off, tabarouette, for the number of connections you make, the meetings you can have, the skills you can develop, the discoveries you make. It takes time, we're tired, we don't sleep, but the recognition is there. I'm too tired to speak in English.
I'm sorry to all of you who only speak English.

Absolutely. But thank you very much, I'd invite you to step to the side, thank you, and I'm going to call Suhira. Suhira, can you come up on stage, please? Suhira is a former CTF participant who is now part of the outreach team. Oh, microphone, someone's sitting on a microphone. Don't we have four microphones? Are we missing one? No, I think you're sitting on it. Ah, there we go. All right, thank you, Suhira. So yeah, as I was saying: former CTF participant, now on the outreach team. You're a seasoned threat researcher, you presented earlier today, gave workshops in earlier years, and you're part of the outreach team. I found on LinkedIn that you did a bachelor's degree in BC. It's a loaded question: what convinced you to move to Montreal, and how did you discover NorthSec?

What convinced me? Smoked meat.

Smoked meat! Oh, interesting.

Well, actually, I got a job opportunity, I could work as a contractor at Google, so I made the move, and I was like, why not the east coast, you know? It's just cold and French, what could go wrong? And I'm still here, somehow.

Well, that's an amazing story. Yeah, we should clap for that. So, in 2018, NorthSec had a sudden death: the top two teams in the CTF had accumulated the same number of points. First place and second place were equal, and we had to decide who wins. Pierre-Marc Bureau created a reverse engineering challenge at the last minute, and you were on stage with Zack Deveau, who's here, from Goats, facing another team, doing live reverse engineering. How can you go through that amount of pressure?

You don't. Yeah, you just don't. I think if Zack had not been there, I would probably have been more frozen. It was fun, it was weird. I don't think I'll ever do that again, but it was fun, it was fun to win the point.

You did win, yes.

And it was my first CTF, so, yeah.

For us organizers, this is a key moment in NorthSec history that we wish we had filmed. Something unbelievable happened: we were wondering what to do, we threw around ideas, and at some point one of the teams thought they had it, but Pierre-Marc verified it and it was not the flag. But you guys kept going, and then you got the actual, real flag. I was like, "Oh my god, there's so much pressure," and I was just in the audience. So congrats on that. Do you remember any of it, or is it just a blur?

I think it was a Linux ELF file. Zack had radare2 open; I didn't do anything, I just stared. I might have a side anecdote about... okay, apparently not, I blanked out that memory.

Yeah, we did realize afterward, after the sudden death happened and everything, and everybody who was involved might remember this: one of the teams in the sudden death had not submitted the rules flag. With that flag, it would have been over before it even started.

Amazing. All right, last question for you. Well, actually, you know I'm always doing double questions or whatnot: what would you describe as the biggest challenge working on the outreach team?

I guess the biggest challenge is maybe getting more, I don't know, women to attend and run workshops, I guess. I'm really bad at this. I think that's the more challenging bit. And I find, I don't know if it's only in Montreal, that there are not as many women attending workshops, so it's trying to get them involved and creating a space for that. To see more women, yeah, it's difficult.

All right, but good job trying to achieve the difficult. So, I realize I miscalculated: we have a chair problem, which I'm going to smoothly handle like this. I'm going to call on stage Serge-Olivier Paquette, who will take my chair. And we have another microphone... do we have another microphone? We don't even... can we go get one in the next fifteen minutes? But for now
we'll just borrow one from Émile, from the overexposed people. All right, thank you. So, Serge, maybe we'll keep it like that. We have another chair, but we'll see. Do you want to sit on my lap?

Oh yeah, we could do that, if you're strong enough.

So, Serge, don't read the questions. Serge, what were the challenges of your tenure? You delivered 2021 and 2022 as president.

Yes, after a couple of years in logistics. There's one big thing that happened; I don't know if people remember when people were coughing and, or, staying home. That was quite a challenge as a president: keeping people together, motivating them to come back and spend some time when everyone was in depression. That was something.

Yeah, well, you handled that very well.

I'm uncomfortable, sorry. Too close.

No, but you handled that very well. People, of course, don't want to remember COVID, and I had other questions about it. So, can you tell us something about each panelist here with us? A story, or something you've learned from them?

Hmm. He loves snowboarding, so we're both on the team of standing on something that glides. And, um, I know very little about... so, unfortunately... And she doesn't mind the cold of Montreal, and loves smoked meat.

All right, okay, last question for the bunch. You have a math background, you work in software engineering, data science, machine learning. I was told that you had impostor syndrome on your way to becoming president. Can you talk about it?

I still do, but I learned to cope with it, and I learned that everybody does, even those who don't look like they do, and it's something to celebrate. The reason people go on stage is that they became the best fucking whatever in the world in their domain; of course you don't know as much as them. So it's perfectly fine not to understand what the hell they're talking about. Just listen, grasp what you can, and be the best in your own domain at some point in life. That's it, and that's what I came to enjoy: that feeling of being surrounded by people who are so much better than me at so many things. It's just something to celebrate, and it's one of the reasons NorthSec is beautiful.

Yeah, good job. All right, so I'll ask everyone to leave the stage, and then we'll go to the next era. We're going from recent years back toward the earliest years, and I realize I'm taking a lot of time, so I'll have to pick up the pace. I would like Florencia to come up on stage. Florencia, hello!

Hello.

So, very exposed.

It's weird. Okay, it's weird.

Yeah? I mean, come on, you've done stage time before.

Yeah, but not in a chair. I don't know, it's weird, anyway.

Aren't you the one who organized the panel format?

I never sat in them, though. It's completely different. So now, suddenly, I'm sorry...

All right. Florencia needs no introduction: she's been running the conference for the last four, five years, and started the year after Pierre-David started it. So Pierre-David did, like, a napkin project, and then Florencia delivered an actual conference event. You were tasked with this big and vague request of making a generalist cybersecurity event tied to a CTF, and making it diverse. So the question is: how did you approach this?

Just trying a lot of things and seeing what sticks. I think the trouble with being a generalist conference is that, no matter what, you're constantly trying to catch up with yourself. You're like, "Oh, we got a lot of applications from people in reversing; now we've got to chase after blue-team people." So a lot of it is just chasing: running after people we think are interesting, watching talks from other conferences, and doing our best especially to look for diverse speakers, which is not that easy. And then, you know, you just have to hope that they actually will apply when you ask them to.

Great answer. I would like, we kind of have an opportunity here to describe the
call-for-papers process in a laid-back fashion. We never actually did a blog post about it, because it's a lot of little things, right? And if you ask anyone on the committee how it works, they'll each say slightly different things. So I would like to have your view of NorthSec's CFP process: how does it work?

So we have two, or at times we've had three, rounds of submissions. The reason for the two rounds is that we try to build hype when we announce the first round of speakers, and also because we do want to target the later rounds. As an example, we get a lot of applicants who are red teamers, so in the second round we want to balance that out and make it all work. The way it works is: we have this open-source tool that we use, and people submit their applications, which include whatever they want to talk about, the full description, any materials related to what they're proposing, a bio, et cetera. Then we have a committee of, I want to say, six-ish people, it varies from year to year, that will go through and vote on how they feel about each proposal. It's not fully blinded at this point, but we do not see each other's votes. So everyone votes, and afterwards we look at how everyone voted, discuss the ones that have high or medium votes, debate, pick, and then try to make it all work. We use Trello to arrange it all and see if the total program makes sense. Does that cover it?

That's perfect. I'm gonna clip this and put it on our website, or YouTube, or whatever. It's so much less time than writing a blog post. Great answer, thank you, Flo. I'll ask you to sit on the side, leave the mic, and we'll call on stage Éric Boivin. Éric Boivin: "scenario lead" is the title, I think, you've been attributed? Yeah, game master,
scenario design, something like that. So what is the job of a scenario lead, apart from YAML?

It's mostly YAML, but yes. The job is to take the technical challenges of every challenge designer and wrap them up into the entire NorthSec theme, because this is an event that is not only the CTF: there's the conference, there's all the swag that you see, the t-shirts, the badges. The theme is everywhere at NorthSec. So my job is to first come up with an idea, then work with the challenge designers to build up that idea, to create an entire universe based on their challenges. It's all about taking their challenges, building a universe around them, and wrapping it up so that when participants go into the CTF, they get a real experience, something that is cohesive, that is not just "oh, this is an SQL injection, this is a reverse challenge." It's a world that you are part of, and you're building a part of that story.

Do people even realize half of that?

I would say maybe not. I know there are a lot of people who are very competitive, and all of these messages, they don't care: "This is noise, we don't care, give me the URL and I'll go to it." But the theme at NorthSec is everything: the audio that you hear, the visuals that you see, the t-shirts that volunteers wear. In the end, that creates an event that is interesting. That's why I think even if participants don't understand the big picture, they can see all the colour we're putting into it, and I find that absolutely fascinating.

All right, I'm gonna ask a trick question: which theme was your favourite so far?

Oh, this is... each year, they are my babies. Some babies are more troublesome to bring to life; some of them are very challenging. But I would say the one I'm proudest of is NorthSectoria, where we did an entire CTF about medieval cybersecurity. It was so wild. I've never seen... cyberpunk is easy, it's part of our culture, us in infosec, but medieval cybersecurity? That doesn't exist. So yeah, the opportunity I have, working on this event and creating crazy stuff like that, is absolutely amazing, and I thank you all for that.

You just mentioned it, and now I have the song in my head. Yeah, the little whistle. All right, about this year's theme: any surprises left, or hints?

To be honest, this year's theme was supposed to be last year's theme. We were not sure last year what we would do with NorthSec, so last year's theme was pretty much built up quickly, because we found out, "Oh yeah, we can hold an in-person event," so it was really last minute. What you're seeing this weekend: put yourself back in May 2022. What was the life we were seeing, as a society coming out of COVID? That's the kind of ambience I had in mind back then. But now, this year, with all that's changed, all the things that have happened, things like the democratization of AI, that's something big that happened this year, you will see little hints of that which couldn't have happened last year. That's why using current events in the theme is something that I absolutely love.

Thank you. So, no surprise, no hint, no reveal? I want insider juice. We clearly didn't rehearse this. Leak something, man.

Oh, peer pressure. Okay, I will leak something. So, you know, there's a lot of iconography that is very weird, very Illuminati, very mystical. Well, keep that in mind: in everything we've written, in the end, there's this mystical power structure that is controlling everything. Challenge designers might not see it, but you will feel the weirdness. And I can't wait to tell you all about what we've created. The challenges this year are especially good, and the integration with the theme this year is absolutely great, so
it's an honour to showcase that this week.

Excellent. So please, if you may, step aside. I'm gonna invite Danny to come up on stage. We were speaking about sound and music earlier, and Danny is a sound designer and DJ for NorthSec, which is something I think we should promote more: our art, the connection we're trying to have with art and sound and music, even if we're not good at it. So, Danny, you did a live DJ stream on Twitch for NorthSec 2020, at the peak of the pandemic, when we were in lockdown. How was that experience, from your garage?

I remember drinking in front of my screen, being on Discord with the other buddies, listening to the music and having fun. What a weird, weird time that was. And I was just shocked by the amount of fun I had, because we had all been in our holes, and then there was this idea of "we can't do NorthSec," and I see Flo freaking out, "we can't do NorthSec," and then it's "no, we can, we can pull it off, we can pull it off digitally." But having been here before, having seen the vibe in this room when people are doing the CTF, to imagine the CTF without that vibe was heartbreaking. So to be able to contribute in some way, to create some kind of unifying thing where people are listening to the same music and kind of sharing that same spot, honestly, I felt it, and it was a joyful thing to do.

Were you looking at the chat at the same time?

Yeah, in between tracks.

So, before and after that, you also designed the sound atmosphere.

Yeah. I don't recall exactly, but I remember it started in 2019, because we needed royalty-free music for our pauses and our stream, and then, severity high, in 2020 we were already planning a DJ set. So I think you did some sound for us, and am I wrong in saying the medieval stuff is yours? So this is your stuff, and we might have a surprise for this year. And, so, what is the process of creating a sound atmosphere?

Well, you get really, really weird prompts, like "medieval hold music," or, you know, you all heard the leak today: it's going to feel weird. Getting a prompt like that is honestly my idea of fun: okay, we basically need filler music, but here's a twist or two. And I think it infuses art into the tech vibe that we have here. Honestly, you can get royalty-free hold music for free on the internet very easily, the same way that if you really need a name tag, you can get paper from the dollar store and write your name on it. But NorthSec clearly feels the need to have these badges, the blinking stuff, exactly. So I just feel like that's the spirit of this event: let's go over the top, let's bring it together, let's contribute our own favourite kind of weird. And a little hold music is how I get to feel important and sit on this...

If I may, there's a saying that I really like: restriction breeds creativity. When you're given limits, that's when the mind can go and fly. That's why working with the theme, and working with her, like, the prompt for this year was so weird and the delivery was amazing. I can't wait for you all to hear it. I think it played today, at some point, yeah. This is for the conference, right, when we have pauses. Did it play today? Do you guys know? Oh, it played in Room B today. So in Track 2, Room B, we call it many different names; "Salle de bal" is the website's name.

Yeah. I made some room: I would like to invite the president of that era, or representing that era, Pierre-David, on stage. Can someone... all right, I'm gonna sit there. You can do like Serge, but I'm getting older, so there's much less material to work with. Nah, I'm not
doing that. All right, I'm gonna be right here. Yeah, don't move. By the way, I mean, this is a very minimalist cue card. So, you were president during 2019 and 2020.

Ten years, yeah.

What were the challenges of your presidency?

I don't know, something that ends with 19?

Oh, another COVID president.

Yeah, I can count. It's a good story, so, how this happened, right. Remember, everybody, put yourself back in your shoes in early March 2020. Early March, this is, you know, hearing about things happening back in Asia, whatever, it's all like a flu, right? We've seen that. And so, we're having an event, on site, in person. Everything is lined up, everything's got to be awesome. We've got badges ordered, like a thousand of these, fifteen hundred of those, seventy grand, right, all in. And I'm riding with my company, we actually have an outing, we don't do skiing, in Charlevoix, right. So we're riding there, and Mr. François Legault, the premier of Quebec, announces that, well, things are happening, and you can't have any indoor gathering of more than 250 people, right, live, while we're freaking riding on the highway, on the 20. So we stop at a place, and this is where you've got to make tough decisions, right, because if you make the wrong decision, you're losing the badges and stuff like that. You're losing seventy grand. There's a company in China that is supposed to be doing some things for you, and you've already paid some amount of money. So we had lunch. I don't even remember what I ate, what happened, where, what restaurant. It had lots of screens, that's all I remember, because Legault was on all these screens. It was probably an Ashton, no, it wasn't an Ashton, they don't have lots of screens, but whatever it was. So yeah, we had to pull the plug on the badges, meaning we were pulling the plug on the event, right, meaning we were pulling the plug on the physical happening of the event. March 11th, right? So like two days before everything, Quebec was cancelled, basically, the entirety of the province was indoors, you know, "fuck you," and that's it.

This is a live stream, by the way.

Yeah, yeah, well, there's an LLM at Google that's going to censor that out, they're very good. So, yeah, I had to call the badge folks and say, "Well, how much money do we owe this company? All right, let's just cancel all that." So: pull the plug, pull the badges, and call an emergency meeting, while I was in a chalet, you know, in, like... And initially, honestly, my decision was: as a president, you've got to make the responsible choice, right? You've got to be the adult in the room. There are crazy ideas, crazy concepts, and you have to make the right choice. So as president, my initial decision was, "Oh well, we'll just cancel it, it'll be easier for everyone." But as a president, in a volunteer organization, you don't rule by authority. That's a trick you learn over time: none of the people here who are volunteering are paid, so you don't have any authority. You have your idea, but you're gonna have to feel the room. And the first, initial feeling of the room, of the entire organizing committee at that time, was: "We'll just do it online. We'll just do it online." I was like, "All right, let's do it online. Let's not cancel it, let's be the first," you know, March 12th, "let's do it online." And so the entire team switched to putting it online. It was crazy. We received countless emails, a lot of, I wouldn't call them threats, but people weren't too happy
about us. March 12th, remember that, March 12th, shipping it online. People were like, "Ah, you know, you're making this way too intense, it's just a flu, by May it's all gonna be done and over with." And so, yeah, hindsight being 20/20, we still made the right decision. That's the story. So we had to pull the plug on, I don't know, seventy, eighty grand. Got a message from Mouser, you know, we had like twenty grand of stuff ordered there: "Are you sure you want to cancel? What's happening? You want to cancel a twenty-grand order? What's the problem?" You know, in COVID. So yeah, that was quite a story. Sorry, I took a lot of time.

Yeah, by the way, time's up for you, but that's it, this is what people will remember you for. No, no, Pierre-David is behind many, many interesting initiatives. But anyway, I didn't have anything prepared, and we're gonna move to the next era, the last era, because food is ready and it's waiting. Four more guests and then we're good. I'm gonna call up on stage Geneviève Lajeunesse, I see her in the back. Geneviève runs our CFP, and has been for a long time, yeah, a long time. So she's part of the OGs, not the OGs, but almost, like the earliest of the crew, more of the stale people in the CFP now. I don't mean that, I really don't mean that, but I think it's great that we're getting new people.

Is this for me? Yeah.

And you also launched, or spearheaded, the community room, which we're still unsure about calling "community room" or "villages," but yeah.

So, about this, I want to say, you know, I'm the face of it right now, but Martin, right there at the back, right in the alley, hello dude, you know, it's a common, shared vision by a lot of people within the organization and community. It certainly isn't a one-person thing at all, and
you know, Jean-Philippe, co-VP this year, thank you, because otherwise none of this would be there, obviously. So, yeah, I guess we went through the intro. So there's this thing, it's a new object, and now we have the opportunity to explain it to everyone with more than a couple of words. How would you talk about the community room? How would you describe it? So, at any conference you go to, you'll get the opportunity to experiment with things you've never tried, right? But what we wanted to do was a little bit more than this. It's cool, you go to this room and you get to learn soldering if you've never done it, or something like this, yay. But it's in that "yay", that moment where you're also learning what's in your environment: oh, there's this makerspace in my community and I could go there, or someone else is passionate about climate change and how it intersects with infosec. So it's having that space to build those connections. And you're not in a talk, so it's not nosebleed-level brain activity; it's not just about having a break, it's about learning to engage. You're here, you're passionate about infosec, you're passionate about technology, it's a whole-of-life thing for you. Well, can we enrich that? Can we connect that to other things, to things that are a little bit lateral? If we were computer science people in the very straightforward sense of it, we would go one way, but being creative helps us in our work, it helps us see things a little bit differently. So you need to enrich that with a bunch of things that may or may not seem like they belong, because this is how you find out where you as an individual belong, and also how you can include more people into this. Because otherwise we'll keep doing things the same way and we'll repeat the same mistakes. So it's kind of meta:
you know how learning a thing that isn't what you usually learn helps you learn better, and learn more? That's that space, in essence. And also, I really, really, really like puzzles, so, you know, puzzles everywhere. Life is a CTF anyway. Yeah, so, I think a CFP is very well formatted and people understand what the contract is: you do a pitch, basically, and then it's rated and you get accepted. But the process for a community room obviously needs to be more hands-on. Let's say someone has an idea and is unsure if it's a good idea or not. What would you recommend they do? Yeah, so that's a thing we're looking to build. The more we do the community room, the more people know what it is, and then you're like, okay, I love sailing and I'd love to explain how this works, and this will become more real as years go by. I'd say the number one thing is just throw yourself in the pool. You don't need to be a group of people; you can be one person, two people, and come in with an idea. It's not like when you submit to the CFP, where we judge you, where we want this to be watertight, to be perfect. For the community room, if you're thinking about something that could work or might not, or you would do it but it's expensive and you don't have the resources, or you would do it but someone else would be far better than you at this and you just want for it to exist, then talk to us. And the thing is, the CFP is great for talks, because a talk, you sit down, you write it out and you're good. But these types of things can take a longer time, right? You're building this out, so maybe we need to talk in the summer. So what I'd say is: you're all on this Discord, and I'm on there too, the handle with many i's at the end, so reach out, let's just chat. If it's a bad idea, I will tell you. I spend my days having bad ideas, I have a lot of experience in them, so
don't worry about it, you won't faze me, I have worse ideas than you do, I'm sure. So that's what I would say. Yeah, so there will be, as every year, a CFP period, but that doesn't have to stop you, you know? If the CFP is closed and your idea just came, talk to us. We're in a huge space, and even if we were to go elsewhere, I don't see us going into a smaller thing; I haven't seen NorthSec get smaller at all. So don't be too concerned about that. What I would be concerned about is staying silent. If you went to the room today and were like, oh, I wish they had this, then come to us. There's a hacker spirit: break things, void the warranty, do all those things. There's also the maker spirit: get it done. Take it, do it, no one's going to stop you. Social-engineer your way into a better world. I'm kidding, but I mean, pick it up for real, because we only have so many hands, and we would truly love your involvement in this. Yeah. Oh no, I'm seeing my daughter coming. There's another thing I want to say also: it's not just for what's in front of you, it's also to articulate things. Maybe, you know, I'm greying at the temples, but if there are things you used to see at hacker cons and you no longer see, it's culture, and we need to keep our cultural life. So bring it. Yep, that's a great answer. I'm going to switch questions at the last minute, because I was going for the other one, but I'll take this one. As a parent, oh no, I see where this is going, with a successful professional career, involved in NorthSec at the vice-president level, with limited time, how do you manage your time? Any advice for people who are inspired by this? Club-Mate. No, no, the real answer is: you do something every day. It's the same for my learning. If it's going to be 15 minutes, I look at my week and I'm like, oh, this is going to be tough, but you create those habits and you stick to them.
There's no sugarcoating it: there are times in the year where it's a lot. But it feeds you. And I guarantee you, if you're focusing on the mundane stuff that you feel you have to get done all the time, it just drags you down; but if you focus on the things that feed you and build you up, yeah, they're difficult, but doing the difficult stuff is what makes it worthwhile. And if you're a parent, you know it: there's a ton of difficult stuff you would never question is worthwhile at the end of the day. Hello, kid. I'm now getting woken up at 5 a.m. too, we're bonding, yay. No, but I mean, the difficult stuff oftentimes ends up matching up with the stuff that is super worthwhile, so go for it. I don't think there's anything exceptional in the things I do, other than that I'm exceptionally driven to get them done, and it feeds me. Another great answer, thank you. So I'm going to have our next guest, who needs no introduction: Laurent Desaulniers. Hi. Hello. So Laurent is a challenge designer, obviously, a long-time challenge designer. What's the first year you designed challenges? The very first one, actually, and HackUS before that, so year minus one, yes. Okay, so you didn't play in any NorthSec, you were already a challenge designer? Correct. All right, so ten, eleven years as a challenge designer. How many challenges do you think you have delivered in total, excluding trivias? Excluding trivias, I figure perhaps, let's say, six or seven per year, so I'd say roughly 77. All right, that's good. At an average of, let's say, three points per challenge, your NorthSec career is like a third of a regular NorthSec. The funny thing, and that's what's amazing about NorthSec, is that the very first year we were very few. Charles-Frédéric was doing some and I was doing some, but individually we did a pretty big part of the CTF, because we were, like,
five. So if one of us messed up, it was 20% of the CTF, for real. Nowadays we're like 60, and it's amazing how much more reliable this is, how much we can rely on one another, and that's basically built on this community. So I am very thankful that I am a minute, tiny part of NorthSec now, since there are so many great challenge designers around. What are your best stories around challenges? Anything come to mind? There were some amazing challenges. The personal one I had was a key-grinding challenge, and people brought Dremels and stuff, and there were flying sparks all over; the security guy was super worried about why people were doing that. That was pretty fun. There was the fabled bag of chips, if you recall: a vibrating bag of chips that you had to decode the word from. That was really nice. There were some really cool challenges, but those two come to mind. Actually, I wanted you to elaborate on your obsession with CAPTCHAs. All right, so yes, I really, really like CAPTCHAs. I feel they're clever, they're a good way to bring in different abstract challenges: you can talk about crypto, about race conditions, about logic flaws. Nobody really cares about CAPTCHAs in real life, but I am an enthusiast, some would say fanatic, and in ten years I've done 14 CAPTCHA challenges. So yeah, I really, really like CAPTCHAs. That's a hint for preparation if I ever heard one. The funny thing, and I keep repeating that story, is that the very first year you had to solve 500 CAPTCHAs by hand, and one team actually did it by hand. Two people solved it, and it took them their whole weekend, because if you missed one, you had to start over. And we gave them a Python programming book as a gift. That's another great story. How would you describe the approach to designing a great challenge? So I'm really thinking about this, because it's super challenging:
once you know the solution, it seems obvious. So the rule we try to follow now is that the what-to-do should be very obvious; the how can be very hard. Never should a player in front of a challenge have to ask themselves, "I have no idea what to do." For example, in past years I had a web app challenge that was one function called search; it was the very only input on the page. So anyone trying to attack that web page would say, gee, I have one input, I wonder what it should be, right? So the idea is to make it as simple as possible on the what-to-do, and on the how, well, there were people burning zero-days; in cryptography, the how could be very, very difficult, and some challenges were pretty insane. Thank you. All right, I would like you to step aside for the next one. Yeah, shift left, I guess, from our perspective. I would like to invite David Goulet on the stage, or, as we affectionately call him... So David is part of the infrastructure team now, a NorthSec contributor since inception, former VP of the CTF. Oh yeah, exactly, that's the thing: before NorthSec, HackUS existed. It's a different beast, but it shared a lot of common values and people, and David was a founder, I think, of HackUS. Okay, but that's a tangent. Okay, my first question, and I mean, I didn't do a bio, I'm sorry: is it true that you are the one who introduced the scooters? Yeah, I think at some point I was really sick of walking over and over again, so I did ask for the small scooters, and actually Kevin, who was with us at that time, went to buy three of them. They were 40-pound-maximum scooters, very small, green, and we still have one working ten years later. So why are they necessary, for people who don't build NorthSec? Oh well, I mean, we've seen the size of these rooms. We go over and over again: oh, we forgot the cutter, we forgot this gizmo, blah blah blah. We run wires,
and not only that, but these last couple of years almost everything is Wi-Fi; before that, we had to run hundreds of wires. So the scooters became instrumental in our livelihood. It's not just to roam around looking cool, not the hacker-movie-style longboard and shit. No, not at all; it has a practical side to it, true story. So can you talk about HackUS, what it is, for people who don't know? So, long story short, it was in 2009. A couple of friends of mine, I don't think they're in the room right now, but we went to this contest created by CRIM, the Centre de recherche informatique de Montréal, there you go. They created this contest, and the entire point of it, I think, was to bring in people from different universities and colleges and expose them to cybersecurity, and at the end there was this bunch of private corporations trying to hire us. I think that was the whole idea. So we had some fun, but not really. We ended up talking on the way back, because we were studying at the Université de Sherbrooke, which is almost two hours from here, and we said, maybe we could do something better than this. Maybe we could have more fun, introduce some new concepts and some existing concepts, like Hacker Jeopardy, which has existed at DEF CON and CCC for a long time. So, we were three at that time, and a few weeks later we created HackUS, which is a very clever name, you know: "hack us", you get it, but also hack US, for the Université de Sherbrooke. Us: you and us. And we started this, and the first edition was held in 2010, with 67 people, actually in the cafeteria of the university. And it was pretty fun. It was indeed; I remember we were playing with Amish Security back then, having lots of fun. So what
motivates you to keep giving a considerable amount of time to it? After 11 years, plus HackUS, plus the one year off, that means a 15-year career of contributing, not as a player but as a giver, a builder of worlds for others. I mean, it's pretty fun. A lot of people came up here before and said that they meet people, and that's actually very true: we meet a lot of people, and the logistics of it are extremely fun. But there's one thing that is very, very cool for me. I used to do it, I don't do it anymore, but it's being a challenge designer. Through eight months of work you build this challenge; it becomes the thing that you polish, that you love, that you hate, and everything. And then for two days people go at it, and that is the most amazing experience ever, because you have so much fun watching people sweating and enjoying it. That is a big driver for me and for a lot of challenge designers, so I encourage everyone here who wants a challenge to actually join the team. But apart from that, why am I still doing this and not the challenges? I think it's just fun. Yeah, it's fun. Excellent. So it's time now to bring up the last guest, the OG president, Gabriel Tremblay. Come up on stage, please. All right, Gabriel, you were president... I didn't realize just how long your tenure was before I wrote this. It's PTSD every time we speak. You were president from 2013 to 2018. To maintain and create an event like this is a great accomplishment. So what was NorthSec's original idea? It's not an idea, but it's cool because David is here. Short story long, which is usually the inverse, for the people that were there back in the days: we used to love doing cybersecurity, but we felt a bit alone. So we had this competition in Sherbrooke called HackUS where we would go, we would compete, and we really succeeded, just to say. And then, with the people that were competing with us, we were building a team, and the team that was
organizing the competition back then, we decided to get together and build a team to do international CTFs, to go do the DEF CON quals, to go do iCTF and more. A lot of us met during those days. And it's funny, because we had good success back then, but we had a problem: we had difficulties recruiting new people, because back then the number of elite-level security pentesters was kind of limited. And there was this competition in Sherbrooke that, by this time, had problems financing itself. Is it okay to say that? Yeah, it's fine, that's what happened. So we were looking for ways to recruit more people, and we thought it would be nice if we built something in Montreal, just a CTF, just to train people in what we were doing. It's as stupid as that. So we decided to build a competition that would be as hard as the stuff we were doing on the international scene, which was nosebleed level back then. So, slowly but surely, we built that competition. I remember one of the first goals we gave ourselves was: you shouldn't be able to do all the flags. It should be so hard that if you managed to do a flag or two, you had a good competition. So that was the basic idea, because it came out of a need; that was the need. So I went to the old crew that was doing HackUS, which had stopped by then, and I said: why don't we move it to Montreal, why don't we rename it, why don't we rebuild the way it's done, why don't we change the way it's financed? And this is literally how NorthSec came to be, as a way to become better at what we were doing. Great answer. And so from there, how did you decide to grow from a CTF to a week-long event spanning training, conference and CTF, what I'd call today the NorthSec festival? So, fun fact: being poor when you make an event sucks. You have to make everything happen, and the first year we had five K as a budget. That was the budget, the five K, yeah.
That was five thousand dollars. But the room was free at ÉTS, so, fair enough, that was solved. So we ran at least, I think, one year, and I told myself and the guys, and it was only guys back then, sorry about that, we weren't that good: if we want to grow, we need money. And what's the best way to make money? Let's do a conference, because the idea was that the conference would probably be able to fund the CTF. And believe it or not, it worked. We recruited Pierre David back then and we told him, you want to make a conference? Because we need money. He said, I can do that, and that's how it came to be. A couple of years later we were here. It was amazing, but we were still really poor; we weren't able to do crazy stuff like you see these days. So we went to see, I think it was you, and we told you: we need money, you want to do trainings? And that's how training came to be, out of necessity. The trainings were meant to fund the conference, so the conference could fund the competition. That worked, and that's pretty much how it came to be. This is how the event grew: out of necessity, to fund the CTF. (Applause.) So, I guess, we're short on time, it's my fault, but I have two angles here we can take, so you'll pick the one you like. You talked about money already, so I don't know how much you want to talk about budgeting, but I know that in the early days we had some pretty guerrilla meetings about budget, like, cut this for that, or more of that, less of that, and we were arguing over stuff we cut or not. Do you want to talk about that, or do you want to get into the anecdotes? No, I'm going to go with budget. All right, no, anecdotes, anecdotes. All right, so I have, I mean, I have stuff that will just reveal the whole anecdote. You want to talk about that? Yes, but don't name anyone. Go ahead. All
right. So the fun thing is, you look at this event and you look at all the people sitting there, and they look responsible, because they've aged quite a lot since we started. But the beginnings were kind of rough. Think about the people that get angsty in CTFs, have too much beer, fight in bars, those kinds of people. So we're building an event, and I would say, because it's kind of gone now, that brilliant people do shitty shit; this is what they do. So we did the first year of NorthSec, we get to the hotel, and the first night we had an epic Hacker Jeopardy. I think it would be illegal today, because that's how things were back then, and not to make any apology for it, but we didn't know what we were doing. And then we get a call from the hotel the next morning: someone had stolen all our APs, all of the wireless access points in the hotel. And they're like, we have them on camera, so either they bring them back or we're going to call the cops. And we're like, oh, I know who this is. We were there; the night would never end. So these people, you know, we were part of it. We didn't steal them, but they were "borrowed". The people we worked with, it was not as serious as it is today. I think we returned the APs and nobody got arrested, so that was a good thing, I guess. These are pictures of the first Hacker Jeopardy at ÉTS, so the first Hacker Jeopardy of NorthSec, but we had prior Hacker Jeopardys at HackUS; this initiative was started at HackUS. We're always in the old pictures, because it was like ten of us. You want another one? Ah, yeah, sure. Did I throw this away already? Nope. Yeah, that one, I don't know about that one. You can say it in French: there are 15 of us. Oh yes, I think this one is better. Okay, go ahead, it's the last one. Okay, no, I will go with budgeting now anyway.
Ask me a different question. Oh, ask me... I threw it on the floor. Oh, I know, I know. Let's go: I could propose one, which would be good, except you're going to answer your own question. Yes. Okay, go ahead. As Olivier was about to ask me, we were having this discussion in my head: he said, what's the hardest thing about leaving NorthSec? Because it's going to happen to everyone sitting on that stage at some point; some people here did, some people are planning to. And it's interesting, because I had to leave NorthSec at some point, like somebody mentioned, babies at some point, that's cool. One of the things I found the hardest was to leave NorthSec, because it's what I did. As I remember it, the organization is only ever as good as you are, so at some point you become the limitation of the event. The event you're seeing today couldn't have existed with me, because I had gotten to a point where what I knew as a president, as the guy that ran the show, wasn't enough. I sucked at diversity, I wasn't good at many things, so I found people that were better than me to take over. But even today, after I left in, I don't know, 2018, even today I want to come back, I want to help, and I always tell myself, na-na, don't. It's like a child, and I never had children. I help at the bar; I'm just a clerk, that's what I do. But yeah, at some point, when you create something like NorthSec, if you want it to become as good as that, you need to let it go, you need to let it grow on itself. You need to give it everything you can, but at some point you've already given enough. So thank you for asking, Olivier. If you ever create something like that, let it go at some point. It was a great question, I know, you're so good. All right, so we're running late, food is already over there. Martin, do you want to come up and talk about the logistics? So we're going to turn into party mode. If
you want to talk about what happened: we cannot summarize 11 years in just one hour and fifteen minutes, but you've seen 12 people tonight, 13 including me. Go and talk to them; they would love to talk about NorthSec and answer your questions. So, with that, Martin, talk to us about the party. All right, good evening everyone. I wanted Gab's mic so I'd have the same voice as him on the sound system; it doesn't work the same. Listen, we're going to need about 40 minutes, maybe an hour, to change the room here. We have to remove all the chairs and stack them on the side. I was going to ask for your help, but I realized all the chairs are zip-tied together, so we'll have to go find some X-Acto knives. For those who want to stay another five minutes to help make stacks of five and put them on the side, I'll go get what we need. Go eat; we have about an hour before the first part, a DJ who'll build things up a little, and after that we'll have a show, One More Time, around 8:30 p.m., maybe 8:45, depending on how the planning goes. The caterer has arrived and the food is served. We don't want food in this room here, but we've set up more cocktail tables for you on the other side. And a little novelty as well: there's a cocktail menu at the bar, so don't hesitate to try what we've prepared for you. Have a good rest of the evening, everyone, and I look forward to seeing you at the show. Thank you.