Alrighty, hey everybody, welcome to our panel. I'm here with Igor Kozlov, who gave us a great talk on how they use data science, specifically machine learning, at Bell Canada to do detection engineering. We have Mathieu Saulnier — I think I got that right this time — who just finished his talk, Full Circle Detection. And then we have a bonus guest, Carlos, AKA Plug, who is a threat hunting team lead at the Paranoids and has made some really great contributions on the community end of detection engineering and detection and response in general. So guys, I appreciate your time, and hopefully we get to talk about some things. I know we have some community questions that we wanna go into, but first I wanted to kind of set the stage based off the talk that we just heard.

Mathieu, how important is it? So in the intro I talked about how one of the big differences between the red side and the blue side is that the blue side is very much a team sport, right? If you're trying to do this thing all by yourself, you're gonna have a hard time. And that could be both internal and external: internally we need to work with our partners on other teams — maybe they're on the infrastructure team, maybe they're the incident responders — when we're building detections, and community-wise, we need to share that information. And I'll get everybody's opinion on this, but how important is it to share, or to work as a team, to bolster our ability to perform detection and response?

Yeah, I think it's one of the key things, and that's the whole thing I was talking about at the end of the talk, right? The red teamers share a lot: they do blogs, they share their tools, they share their TTPs and everything. And on the defender side there's always this almost false sense that everything is intellectual property — that if we share, we're gonna expose our defenses to the attackers. But as I said in the talk, and I'll say it again, lots of the things that we're building detections for are actually already known. It's not intellectual property to look at LSASS, or to make sure that we get a notification when LSASS is touched, or in this case, when those files are created. So yeah, and that's for the whole community and inside the company. It's a bit harder when you're a team of one, and that's when I think you need to rely more on the community — and that's why I think it's so important to share with the community. If you're lucky enough to be on a team with many people who specialize in different things — some in threat hunting like Carlos, or machine learning like Igor, or building detections, or who are extremely strong, as you mentioned, on data collection and building pipelines — and you share all of those things that you're building as a whole team, so the smaller players get help too, I think the whole community benefits: we're gonna see fewer breaches, we're gonna see better protected companies, and it all goes together. It's one of the main reasons why I like to do talks: to try to share the knowledge that I was lucky enough to gain from all of the other people that I admire.

Awesome, yeah, great. So one of the questions that just came in, which I think moves through that line of questioning pretty well, is the idea that attackers are very rapidly coming up with new attack techniques. How do you keep up — what's a good way to prioritize?
I think it's important — and it's kind of implied in the question — to have a goal for your detection efforts. You always wanna have a target that you're trying to achieve; that way you can validate whether or not you achieved it, right? So how do you judge it? Does anybody have good ideas on how you can actually prioritize what you're looking for? For instance, Igor looked at credential abuse, or Mathieu, you were looking at Outlook abuse. How do you choose what to look at? Is it just, hey, I saw something cool and I think I should go after it, or is there actually a process for determining that? I'll go to Igor first.

Yeah, thank you. So I just wanted to go first because I also wanted to comment on something that you said before my presentation, which is exactly related to what Mathieu was talking about. What you mentioned, just to remind everyone, is that analysts often lack context: the data scientists say, oh, this is anomalous, and then the analysts look into it and ask, why? And I think this relates exactly to what Mathieu was saying, and to his idea that people working together are always stronger. I worked with Mathieu personally, and I know that when he was building that team — a team that is extremely successful, we have a lot of superstars there — he said, I want people who are very different. I don't want the same type of people. And here I'd like to circle back to what you said: why do analysts lack context? It's because data scientists never work with analysts, and from my point of view, this is the biggest problem. When people with very different backgrounds come together, that is when we have the power.

So now, going back to what you said: priorities are usually set in a corporate environment by other people, but my personal comment would be this. I was mentioning credential attacks in the presentation as an example, and I was also saying that it can be extended to lateral movement and other TTPs. Credential attacks are just one thing that everybody knows about — it's easy to sell. And indeed, not everything can be attacked equally easily with, for example, the out-of-the-box machine learning techniques that are out there. For that reason, whenever somebody tells me, let's use machine learning here, I say, well, can I please look at the data first and talk to people who have a lot of experience in this field, and then come up with a timeline for the solution? And sometimes managers would agree that, okay, it indeed seems like a simple problem, but the solution would be very complex, because we don't have the data or we don't have the experts. So I would say we indeed prioritize according to the enterprise — they have their decision metrics, they involve a lot of smart people, experts in the field, to say what the priority is, and we have a lot of industry research on that. But when it comes to machine learning, I think it's important to understand that data is key, in the sense of garbage in, garbage out. If you don't have good data, it will not work. Thanks.

Carlos, you wanna go ahead?

Yeah, so this is an interesting topic, in particular because I hear it often: how do I get started? What do I do? And I think it's important to start with the notion that there are different businesses and different organization sizes. So I would start with that, right?
Taking that into account — because you wanna start simple regardless of the size — the key is that you find a way to scope things, and you really wanna narrow scopes. There are multiple ways to do that. You can take the intelligence approach: what are the APT actors that are after us? You can look at vulnerabilities: what operating systems do we have, and what are we vulnerable to? You can run a project where you define what your crown jewels are: what are the systems that I'm worried about? Maybe you're worried about insider threats. So there are multiple ways. You have to pick one and try it, and if it doesn't work, go to the next one. That's probably one of the most important things I want to make sure people know: there isn't a solution that applies to everyone. You have to try a few things and you'll find out how things work, but you have to iterate. You have to be methodical — and that's the other thing, you have to find a way to develop a process, a process that you can repeat so you can improve it over time. And as you go, you will start seeing the benefits.

So to recap: there are many ways to do it. Find one that works for your organization, its size, and the type of work that you do. Develop some methodology, or borrow one of the many that exist — some have been presented here — and then try it. Make sure you incorporate some learning reviews, and then you iterate, and you will find out how you're gonna prioritize things. If in doubt, there are plenty of forums out there where you can come back and chat with many of us, and we can dive in further. But that would be the TL;DR on this.

Yeah, I often tell people, when they ask me that question: first of all, something is better than nothing. Don't get to where you're paralyzed, unable to make a decision — use some intuition. If you're just getting started, think about the things that you hear about the most and just go for that, right? Otherwise there's the threat intel approach. There's an entity-based approach: for instance, we worked with an airline at one point that decided to start building detections around their loyalty system, because they knew that there was some interest in hacking their loyalty system, so they decided to focus on that. Maybe Active Directory is something you wanna look after. Maybe there's a tactic-based approach, where you say, hey, we know that persistence, by definition, is persistent and going to be there for a long period of time. We don't have great collection capability right now, but we know persistence is something that's always gonna be there and is meant to last for a long time, so let's focus there first. Or maybe we only have collection on domain controllers, and so we wanna look for Active Directory-type attacks first, because we know that we have the telemetry to do that.

Carlos, one thing that you touched on was this idea of continuous improvement, having a process. One of the things that we like to talk about at my company is that we want to differentiate between being successful because we have a single person, which is individual competence —
you have a really smart person who does a great job — versus an organization that is organizationally competent, meaning that you can replace individuals, or you could hire new people, or a person could leave, and the new person is gonna come in and plug right into the process and know what they're doing, because there's documentation, there are relationships between organizations, and expectations, and things like that. How important, Mathieu, is that to the concepts you were talking about with that detection lifecycle — being able to provide that IR playbook and that validation, those types of things?

So just before I go to that, I wanna add two little things about the previous question, if you don't mind. I think that having metrics to show how you improve over time is extremely important, to show your management that you started at point A, and that maybe in Q2 you're at point B, and then at point C by Q4, and that you're improving. I think this is very important. I also did another talk about that, called SOC Counter Attack, which is based on using the ATT&CK framework from MITRE as a starting point to build your detection program. And the other point, when you ask where to start: I think that if you have a strong security solution, you might want to know where your current security solution is actually failing. If you have a very strong EDR but you're weak on the network side, maybe you wanna focus on network-based attacks, and the other way around: if you have a very strong IPS and very strong firewalling and zoning, but you're a bit weaker on the endpoint, you might want to focus there. So it really depends on what you have, and it will be different for each company. Testing some new techniques, testing some known techniques — for me that's extremely important, a kind of purple-team-ish type of approach.

Now to your other question, about replacing people in a SOC, and having one very competent person versus a full team: I think it's extremely important to teach your new people, because a SOC is typically somewhere where people turn over very fast. You'll have new hires; the better people will get promoted — they won't stay in that analyst role for very long. So it's very important to have something in place that makes those people great, fast, and makes sure that everybody is at the same level. You don't wanna have a rock star — and I don't like that term — one analyst who is very, very good whereas your other ones are average. You want all your tickets to be handled the same way, every time of the day, every time of the year. You don't wanna go into Christmas thinking, if this guy is on shift and this type of attack happens, it's doomsday, right? You want everyone to have the same level of understanding, and that's why I think the training part is extremely important: bring your analysts to the next level, and that will bring the whole SOC to the next level, and that will bring the whole corporation to the next level in its security maturity.

Yeah, I think one thing you reminded me of is that it's probably okay to have a superstar, right?
But as a leader, or as somebody that's running the program, their effort — and maybe even some of their metrics for job performance — should be related to how they give back to the organization and make the organization better, as opposed to keeping it all to themselves. Like, as a consultant, we'll go work with a company and you'll ask, how does this work, or where is this documented, and the person will point to their head, and you're like, okay, well, that's bad, right? Because of the bus factor: if that person gets hit by a bus, you're in big trouble.

There are two questions that I think are related, and I think everybody probably has an opinion on this. I'll give them both and let everybody give a quick answer to both. The first question is: what data set is always gathered but is ultimately mostly useless? And then the corollary to that: what data is rarely gathered but would be very useful if it was? Maybe I'll go to Igor first.

Sure. Well, I can start with a funny story, right? When I had just started, Mathieu said, so what should we do first? and showed me the MITRE ATT&CK matrix, and I was like, dumpster diving is the easiest one, because we can just set up a camera — we can detect people very easily with machine learning. And he was very surprised, because for him it was something strange: why would that be an interesting source at all? And in terms of useless data, I am the guy who likes data, so for me there is no useless data. What is useless is when I look at data and I don't understand it; but then I go to somebody who knows what it's about, and they can provide insights, and it becomes very interesting. So I think what I don't like is not having people around who are experts on certain data. And this also relates to the superstars — I totally agree with everyone that it's important for everyone to be able to pull their weight. I just want to say that I like superstars on teams, because they inspire everyone with their passion. When you see a person who is burning with an idea, other people also grow and shoot for the stars. So yeah, thanks.

Go ahead, Mathieu.

Well, "useless" — it always depends, whether you want to build detection or you're talking about forensics.

This is like a religious question, by the way, basically.

Yeah, I know, I know. But I like polarized debates. I think firewall logs are slightly overrated. It's hard to build meaningful — and I would say actionable — detection based on firewall logs; on the other end, when you have an incident, you definitely, definitely, definitely do need those firewall logs. So it's a double-edged sword, right? You don't want to not have them, but they are extremely chatty. So it's a difficult question. And the one source that we should have that a lot of people don't have is probably Sysmon, because it's such a rich data source. Lots of people — and I have seen this in many companies where I worked or where I was consulting — say, well, we have an EDR, it already collects all of this or the equivalent. That may be true, but most of the time you cannot do everything you want: you cannot control it if you want a very specific event type or a very specific value because there's a new attack, a new vector. You don't have any control.
So when you deploy Sysmon, you might have it very toned down, if you will, and not duplicate what's collected by your EDR solution or any other solution, but at least get the things that your EDR is not collecting, and then you can build meaningful detection based on that.
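[Editor's note: a minimal sketch of the "tone it down" idea Mathieu describes — enable only the Sysmon event types that fill gaps your EDR leaves, rather than duplicating it. The event names and the EDR coverage set below are illustrative assumptions, not a vetted Sysmon configuration.]

```python
# Illustrative only: choose which Sysmon event IDs to enable by subtracting
# whatever the EDR already collects. Both mappings are made-up assumptions.
SYSMON_EVENTS = {
    1: "process creation",
    3: "network connection",
    7: "image load",
    11: "file creation",
    13: "registry value set",
    22: "dns query",
}
EDR_COVERED = {"process creation", "network connection", "file creation"}

gaps = {eid: name for eid, name in SYSMON_EVENTS.items()
        if name not in EDR_COVERED}
print(gaps)  # -> the event types worth turning on in Sysmon
```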
So this is actually a very interesting question, and I'll go back to the earlier point: it's about the size of the organization and what you do. In general, I have this idea that data is only useless when it doesn't have or provide any context, and it's our job to find the context of the data. But even then, there is a lot of data that just gets accumulated. If you're not gonna use that data, then it brings absolutely no value to you, even for forensic purposes, and you need to account for the fact that that becomes problematic. If you have a lot of data that brings a lot of noise, or data you haven't studied, you wanna think about it: is it time for me to bring that data in and use it, or not? Otherwise, sometimes it makes things even more difficult. I come from an organization that gets terabytes of data, and sorting and looking through that is very difficult; it becomes dense, right? So I think it's important to be mindful of that. And one way in which you can figure out whether data has value to you is to do something that connects to the previous questions: you really wanna develop or implement a visibility gap assessment. It's really not difficult — people make it very difficult. Just map all of the applications that provide some telemetry, then identify which of them will bring you value for whatever it is you're going after, and then prioritize it, onboard it, look at it, investigate it. What makes data much more valuable — and this is where I think you get this weird notion that some data is useless — is enrichment. If the data is enriched, it becomes a bit more powerful, but when you consume it raw, without that enrichment of context, you make the process much more difficult. So I will say that that is very important too.

As far as what data is rarely gathered: in the context of what I do, which is threat hunting and incident response, most of the folks that I know, even in threat hunting, heavily concentrate on the host side, and you really need to have network telemetry. If you don't have network telemetry, you're not gonna find many things. That network telemetry is key, beyond the host — you really wanna know what's happening on the wire. And I will say that that is one of the best things any organization should concentrate on. If you're good at host, start spending time on the network, because it could be the tip-off. In fact, sometimes the network telemetry shows you that someone is doing a lot of activity that you otherwise didn't notice at the host level.

Yeah, so to add on to that — there are so many things going through my head right now — but to add on to what you're saying about the network: you can also do correlation, right? For instance, I've been really interested in service creation recently. One of the reasons why that's interesting is because it's frequently used for lateral movement or privilege escalation. However, it's also something that happens a ton on any network, right? And so one of the things that you do is you say, okay, how do I know when a service has been created? I wanna be able to identify every single time a service is created, regardless of how it's created, right? I kind of call that the base condition of service creation. But then you also wanna figure out features, or contextual factors, that you can analyze. And one of the things that you might be interested in — because services are used for lateral movement — is: was this service created over the network? Well, the best way to get that is by collecting network data, right? You could get Sysmon Event ID 3, which gives you network information, but it doesn't have that granular context of, this was an RPC request, and this is the RPC protocol that was used, and that type of thing, which is really valuable for us to evaluate. And so I think that gives you a better picture of what's actually happening.
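[Editor's note: a minimal sketch of the "base condition plus context" idea above, under the assumption of hypothetical, pre-normalized event dictionaries — all field names are invented for illustration. A real pipeline would key off something like Windows System event 7045 for service installation plus network telemetry.]

```python
from datetime import datetime, timedelta

# Hypothetical, pre-normalized events; all field names are invented.
service_events = [
    {"host": "SRV01", "time": datetime(2021, 5, 20, 14, 3, 2),
     "service_name": "UpdaterSvc", "image": r"C:\Windows\Temp\u.exe"},
]
network_events = [
    {"host": "SRV01", "time": datetime(2021, 5, 20, 14, 3, 1),
     "direction": "inbound", "dest_port": 445, "src_ip": "10.0.0.9"},
]

WINDOW = timedelta(seconds=30)  # heuristic for "created over the network"

def remote_service_creations(services, network):
    """Base condition: a service was created (fires on every creation).
    Context: an inbound connection hit the same host just beforehand."""
    hits = []
    for svc in services:
        for net in network:
            if (net["host"] == svc["host"]
                    and net["direction"] == "inbound"
                    and timedelta(0) <= svc["time"] - net["time"] <= WINDOW):
                hits.append({**svc, "context": net})
    return hits

for hit in remote_service_creations(service_events, network_events):
    print("possible lateral movement:", hit["service_name"], "on", hit["host"])
```

The point is the shape, not the code: every service creation is captured (the base condition), and the network context is what turns a very common event into an interesting one.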
Yeah, and if you don't mind, one more point on the network, because it's a topic that I hear about often: there are attacks where you actually need the network data to identify them. Beacon is a perfect example, right? If a browser extension has been deployed and used for beaconing, and it's using DoH, then the DNS data on the network, or some other information, will be valuable. You also get JA3 and JA3S hashes and other types of hashes. So it provides not only context — it allows you to connect things — but many times it will be the clue that helps you identify that there's more going on on the network. So definitely, my take is that network data is a must.

Yeah, for sure. Cool. Okay, so one thing that reminds me of: we have this world where we're really focused on reducing false positives, and I think that's what gets us going down the machine learning path — how do we reduce false positives in a smart way? But one of the worries, not necessarily with machine learning, but in false positive reduction generally, is the introduction of false negatives. And the problem that I see with that is that false positives are apparent — you have an alert that you have to explicitly mark as a false positive — but false negatives are transparent, right? So I have a couple of thoughts about that. One is: how do you manage the reduction of false positives — and Igor, I think you probably have a really well-thought-out idea on this — while also managing your rate of false negatives? And the second is: we're not building detections in a vacuum. You're not building one detection and saying, I'm putting all my eggs in this basket and it has to work; you're creating a series of detections. So this one may have a false negative because of X, but people don't just perform one attack technique — there's a whole chain of attack techniques. So how worried should we be about the threat of false negatives?

Thank you very much. I think it's a great, great question, specifically because it highlights one thing: what people usually know about machine learning is what they learn from popular sources, and I see this a lot. When you ask this question about false positives in machine learning, and false negatives don't ring a bell in the machine learning sense, it's because those popular sources are where it comes from — not the textbooks and the professors who've been thinking about it for 70 years. Why do I mention it? Because there are very different metrics. When we're talking about false positives, that's precision, and when we're talking about false negatives, that's recall. And there are a bunch of other metrics that combine both of them — for example, the harmonic mean of precision and recall, the F1 score — and a lot of other stuff. And there is a lot of research about it, which says that you should not look only at false positives or false negatives; you look at all four quadrants of the confusion matrix, because otherwise you can easily be misguided. And what I often want to tell people who have power in decision-making, when they say, let's reduce false positives — okay, let's turn off all the alerts, and we will have no false positives and very good accuracy, because most of what we receive is false positives — is that they don't understand what that costs.

So let me clarify this idea in terms of poker, because in my organization poker is a big thing. The idea is that if you only play when you have pocket aces, you will always be winning. But will that make you a successful player? No, it will not, because you have to play with the hand that you have, and win with the hand that you have. So for that reason, it's absolutely paramount not to focus on just one metric by itself. There are, again, a lot of studies in psychology showing that when an organization starts following one metric, people start gaming that metric just to look good, and it is never a good thing. So when we're talking about false positives, we should always be talking about false negatives at the same time, at least. We should be saying: at this rate of false positives, this is the rate of false negatives — do we accept this? Once again, there is a lot of research, so let's not deep-dive into it. But what I want to tell you is that there is only one way to improve both at the same time: build better detections. Once you build better detections, you can reduce both false positives and false negatives. And sorry for the plug, but in my presentation I specifically show that if you just use one type of detection — unsupervised machine learning and rules — you will not see what you will see with the addition of supervised machine learning on top. And that was just one example: regardless of what you're using, you need to include more and more ideas in the thought process. And as you were saying, you need to bring a lot of dimensions into the game, and you also need to prioritize — this is what the other speakers said, and I absolutely agree. Machine learning can just help you prioritize a bit better if you have a lot of data. But if you don't have data, go to the experts: security experts are your data. They've seen it all; they know how to help you. So to summarize my answer: look at the problem along multiple dimensions, in the sense of bringing all types of experts into the game to discuss — and security experts are a must; if you don't have data, they will tell you what to prioritize, because they have an intuitive understanding of the situation. Just make sure to also include people who know a bit about data, because we don't know what we don't know. Bring in as much as possible of what you don't know — bring in people whose backgrounds you don't have, and they might help. Thanks.
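[Editor's note: a small worked example of the tradeoff Igor describes, with made-up alert counts. "Turning off the alerts" drives false positives to zero while recall collapses, which is exactly why both metrics have to be reported together.]

```python
def precision(tp, fp): return tp / (tp + fp) if tp + fp else 0.0
def recall(tp, fn):    return tp / (tp + fn) if tp + fn else 0.0
def f1(p, r):          return 2 * p * r / (p + r) if p + r else 0.0

# Made-up numbers: a noisy detection vs. the "turn everything off" strategy.
noisy  = {"tp": 90, "fp": 900, "fn": 10}  # alerts on almost everything
silent = {"tp": 1,  "fp": 0,   "fn": 99}  # alerts only on a sure thing

for name, m in [("noisy", noisy), ("silent", silent)]:
    p, r = precision(m["tp"], m["fp"]), recall(m["tp"], m["fn"])
    print(f"{name}: precision={p:.2f} recall={r:.2f} f1={f1(p, r):.2f}")
# noisy:  precision=0.09 recall=0.90 f1=0.17
# silent: precision=1.00 recall=0.01 f1=0.02
```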
Cool. Anybody else have any opinions on that, or should we move to the next question?

I have some, definitely. But I wanna see if Mathieu has some first.

I think we might all have opinions. I'll give you mine in a moment. Mathieu, you wanna go first?

Go, go, man.

So this is actually a very interesting question, because as an analyst, as an incident responder, you're gonna encounter your presumptions, right? And even when you are "seasoned," in quotes, those might come back to bite you. You might encounter things that definitely look good — there's nothing wrong — but they turn out to be bad, right? So I think the first thing is to address that: be mindful of the presumptions, but then challenge them. I have this notion of what threat hunting is, and the way that I define threat hunting is just a methodology that allows me to proactively look for the unknown unknowns. If you challenge the presumptions, you're gonna find a way to uncover some of the false negatives. It is not easy, right? And it takes time and iteration. But one of the ways I find that can help you — and it actually helps with the notion of improved detections — relates to one of the unfortunate side effects that I perceive happened when MITRE came up with the ATT&CK framework: a lot of folks have zoomed in too much on the atomic type of detections — one detection that will help you detect this variant of Mimikatz, or something else. So I have this concept of molecular detections. These are detections that are more broad: they try to cover more ground, and they're prone to false positives, right? But what you do is complement them with the atomic detections, so you eventually end up with what I call high-confidence detections. It's not easy and it takes some time, but I really encourage people to think molecular. Go one step up and then start looking at the data. And that forces you to do the one thing that, in my opinion, reduces false negatives: understand your data. What happens in your network is unique to your network, and that allows you to determine whether this is really bad or it isn't, and how you're gonna go about challenging that, determining whether it's one or the other. Hopefully that makes sense. I hope it did.
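[Editor's note: a toy sketch of the molecular/atomic combination Carlos describes — a broad ("molecular") condition that fires often, promoted to high confidence only when a precise ("atomic") indicator also matches. Events, fields, and the indicator set are invented for illustration.]

```python
# Toy events; all fields and indicator values are invented for the sketch.
events = [
    {"proc": "lsass.exe", "accessed_by": "procdump.exe", "sha256": "abc123"},
    {"proc": "lsass.exe", "accessed_by": "taskmgr.exe",  "sha256": "def456"},
]

KNOWN_BAD_HASHES = {"abc123"}  # atomic: precise, but trivial to evade

def molecular(event):
    # Broad: anything touching LSASS. Covers more ground, more false positives.
    return event["proc"] == "lsass.exe"

def atomic(event):
    return event["sha256"] in KNOWN_BAD_HASHES

for event in events:
    if molecular(event) and atomic(event):
        print("high confidence, ticket-worthy:", event)
    elif molecular(event):
        print("interesting, route to review or hunting:", event)
```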
Yeah, I usually talk about detections as being precise or broad, and there's a spectrum between the two ends. A file hash would be very precise: basically no false positives, but potentially tons of false negatives, depending on what your goal is. If your goal is to detect Mimikatz and you're looking for a single hash, they can change Mimikatz by changing one byte — now they've bypassed that — but you're gonna have no false positives. And very broad would be, I wanna detect lateral movement: you're gonna have a ton of false positives, maybe fewer false negatives. Now the question is, where does something get labeled a false negative, right? Because you're gonna detect it and maybe alert on it, but somebody's going to close that ticket because they're just inundated with tons of alerts. All right, Mathieu.

Yeah, so that's one of the things I wanted to say on that subject specifically, and I think I brushed on it in my talk: not all detections that you're gonna build are ticket-worthy. If you go broader, you might want to have a dashboard, or a weekly report that you produce on that output, and someone can look at it manually, maybe once a week or once a day depending on how verbose it is. Your detections don't always need to create a ticket, and they don't always need to be handled by your SOC analysts — they can be handled by other people. I worked for an organization in the past where they had people who were not security specialists, but they looked at reports daily, and they kind of became the anomaly detection themselves. They knew that every day you would have — and I'm just throwing numbers out here — let's say 10,000 firewall blocks and 100,000 allows, and if those numbers changed by a significant amount, they just raised the flag to an analyst and said, look into this, this deviates from my manual trend. And they were making those trends in Excel at the time — this was a few years back, of course. That's maybe a little bit of what Igor is doing with machine learning now, and these things exist now, but for teams that don't have that, it's still possible to have people who are maybe not security specialists — or even a security specialist — do this type of work: review things and understand what is normal. We often talk about how you need to know what is normal in your network, and this is one very good way to learn it: you make those dashboards, you look at your different security solutions, you look at the ins and outs, or the blocks and allows, and you can quickly build trends from that. So, yeah.
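[Editor's note: a minimal sketch of the manual trending Mathieu describes — the Excel version in a few lines. The daily counts are made up, and the three-standard-deviation threshold is an arbitrary illustrative choice.]

```python
from statistics import mean, stdev

# Made-up daily firewall block counts; "today" is the day under review.
history = [10_200, 9_800, 10_050, 10_400, 9_950, 10_100, 10_300]
today = 14_900

mu, sigma = mean(history), stdev(history)
if abs(today - mu) > 3 * sigma:  # arbitrary illustrative threshold
    print(f"deviates from trend (mean {mu:.0f}, stdev {sigma:.0f}): "
          "raise a flag to an analyst")
```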
And if I may add, there's also a term that the team that works on a lot of the countermeasures we deploy uses: RBA, risk-based alerting. The idea is very simple: there are many events that trigger things, but there are events that require context, and depending on the risk and the score they accumulate, they become notables. They allow analysts to jump in and say, this is meaningful, I wanna go and look into it. It takes time, and this is, again, iteration — you really wanna keep going through that cycle of, is this working or is it not? And one of the things we haven't talked about, but that I wanna mention briefly: I found that for me to be successful in threat hunting, there is obviously the team, but there are other people who are in the trenches — the SOC analysts know a lot, and the engineers and sysadmins — and they can provide a lot of feedback on what's normal or not normal, what could be a false positive or not. So find a way to communicate with them, to interview them, to have meetings, to get them involved in this detection cycle that you build, because they will bring a lot of value and they will be your partners. It's worth accounting for them too.

Yeah, so unfortunately we're running out of time here, but I wanted to close on just one little point. We were talking about ways that you could handle these types of situations. Dr. Anton Chuvakin, who I'm sure a lot of you are familiar with from Twitter — I think he works at Chronicle these days — has a blog post called "On Threat Detection Uncertainty." And uncertainty is: how do you know that you're doing a good job at this, right? He says there are three things that you can do to deal with this false positive problem. One is to improve alert triage, which is one of the things that Igor touched on in his talk: provide the analysts with the things that they need — and Mathieu, you talked about this with the playbooks — to efficiently, quickly, and accurately triage alerts. Another is to use multi-stage detection, which is also something that Igor touched on with the unsupervised and then supervised machine learning aspect: you can have multiple steps in your detection process, it doesn't have to become an alert immediately, you can step through. And lastly, split bad from interesting, which is what you just described, Mathieu: you have things that you know are bad, and you have things that you know are interesting but you don't know for sure are bad, and maybe you have different pipelines that those go down, and you treat them differently and allocate different resources to them. I thought that was a very informative blog post, and I think you both touched on those concepts during your talks, which I found interesting.

So yeah, unfortunately, that's the end of our time. Hopefully we can continue this talk at some point in the future — I was really enjoying it. It's always a shame how short a timeframe you have; we could talk for hours. Thank you again to NorthSec for having us on for the detection engineering section of the program. And thank you to everybody: Igor Kozlov, Mathieu Saulnier, and Carlos, AKA Plug. Appreciate it, guys, and I hope to talk to you soon and continue these conversations.

Just one thing: today is Igor's last day as a non-dad. So I want to raise my glass to that, Igor. I really hope that everything goes well for you and that you enjoy being a dad.

Thank you so much. I appreciate it. Have a look — NorthSec cap in front of me. Love it.

Awesome. Cheers, congratulations.

Thank you so much.

Awesome. So back to the organizers. Thanks, guys.

Thank you. Thank you.