Welcome back, everyone. It's time for the Q&A and panel session for the malware block of the Nordsec conference. First, please welcome Suhera, a senior security researcher at CrowdStrike. She will be running a workshop on static malware analysis starting at 4 o'clock today, just a few minutes after the panel, and it continues tomorrow, so it's a two-day workshop; probably super interesting. And all of our presenters are here.

I will be asking questions from the audience. It's not too late: if you still have questions or follow-up questions, send them to Slido. I have my laptop open in front of me, your question will pop up here, and I will be able to ask it.

First of all, thank you all for your very good presentations. I have a few very good questions from the audience, and I'm going to start with the ones about air-gapped networks, the subject we got the most questions about. Several people asked about all the proofs of concept presented at various venues where you can extract data with light, with LEDs, with sound, or with other methods. You presented about USB: do you think those are really just proofs of concept, or do you think they actually apply in the real world?

Great question, because one of the reasons behind our research was to see how many of these techniques were used in the wild, and the answer is: not really. They're not in the wild as far as public knowledge is concerned, so it's not impossible that attacks happened and were either never detected or never reported publicly. That being said, those techniques don't execute out of thin air; there still needs to be a payload that implements whatever it is, using speakers and microphones or things like that.
So there is still malicious code implementing those techniques, and that code can be detected. Those are cool techniques for sure, but if you manage an air-gapped network, before you build a Faraday cage around everything, make sure you tackle the USB stick problem properly and that you manage software updates; then you can focus on that kind of stuff.

And do you think the lack of sightings in the wild is due to the fact that we don't have the telemetry, that we couldn't actually collect them?

It's a factor, but then again, with the malware we know uses USB, we don't necessarily detect the use of USB as a covert mechanism itself. We detect the malware that writes to USB drives, or that gains persistence on the system, or that iterates over the file system and finds files to copy. The physical communication mechanism that jumps the air gap is just one tiny part of the whole attack. And even though, yes, it's hard to detect lights flashing at a certain speed, there are plenty of other things that can be detected. So I don't think it's necessarily a lack of detection capability that explains why we don't see it in the wild.

I have a follow-up question that everyone can weigh in on. It's always sexier to talk about how we break stuff, how we were actually able to compromise something, or about potential attacks. But it seems we get less media attention when it comes to new ways of protecting against those attacks. Do you think that as an industry we should focus more on novel defense techniques, rather than spending so much effort on red teaming and trying to break the security of different systems?

I've got an opinion on that, but let's hear you first.
Of course it makes better headlines to say "this is how I broke that" instead of "this is how I defended against a hypothetical attack". But the problem when you show how to break something without showing how to defend against it is: who benefits from that information? Who benefits from a technique to break something that cannot be defended yet? So I think the responsible thing is at least to give some ideas. Sometimes it cannot be fixed easily, but at least give some ideas of how to mitigate, how to detect. Otherwise you're just giving attackers hints to break more stuff.

So I agree with that, and I'd also like to add that it's a question of risk assessment as well. Attackers, and even ourselves who are not attackers, are still kind of lazy, right? You look for the low-hanging fruit, and that's what you go after first. So yes, all these flashy new techniques are great; I don't think they should be overlooked, and they're worth talking about. But we shouldn't forget that things like defense in depth, and making sure you have all your basics covered, are really worth it as well.

I have a question for you regarding your presentation, Leon. Do you think it would be possible to further improve the decompilation, to see an equivalent of the JavaScript that was actually used as the source for producing the V8 snapshots?

Yes, certainly, and that's a great question. What we got was an okay result and we were able to work with it, but it was not ideal, and I do believe we could make it a lot better, especially regarding all these properties. When you load a library and do dot-something, dot-something, dot-something, it creates tens of lines of code and it's really hard to read.
So I do believe the next step for that specific tool will be to improve the decompilation result, and I do think it's possible.

Thank you. I have a question for Ashir regarding the canary tokens. When people build defensive tools and provide services that can be used as canary tokens, are there any mitigations they can apply to avoid misuse by bad actors?

Yes. Canary tokens are one of those tools and services that fall into a gray area: they can be used by legitimate parties such as red teamers, but they can also be used by adversaries to serve their own purposes. I think it's very important for the providers of these services to proactively figure out whether their platform is being misused. It is also the responsibility of the community, researchers like us, to provide the right kind of threat intelligence that prompts people and organizations to block the events or artifacts that are misusing them. So providers need a proactive model where they do takedowns and actively go looking for misuse of their platforms, and you have to couple that with the right kind of threat intelligence so that other organizations can also protect themselves until a vendor or service provider takes action.

I have another question. It's also a problem that those canary tokens generate logs on someone's server. Do you think those logs can be used to further identify victims, and perhaps help remediation?

Correct. Third-party services like canary tokens leave logs that can be recovered and tracked, sometimes even by the service providers, and they can be used in any form. The problem is that something like canary tokens is not usually blocked, not usually part of blocklists.
So it becomes a problem for remediation teams to figure out whether this is actually red team activity, or an actual adversary in your network doing this kind of activity.

All right, thank you. We have some more general questions as well that everyone can answer. One asked whether we're seeing an increase in malware trying to steal or generate cryptocurrencies on non-traditional targets. Is this something you've seen?

I'm not the expert at ESET in that field. What's super common is IoT botnets deploying miners, for some reason, even though those are not really powerful machines. What makes the most money, I guess, is stealing rather than mining crypto, and that's going on quite intensely right now.

Yeah, mining requires a huge botnet, whereas with stealing you just have a single target that you must compromise.

So, the field of malware research has been around for about 30 years now; that's quite a long time. Recently we've seen a new kind of job title helping in this research: threat researchers, people with little or no reverse engineering knowledge trying to identify malware campaigns and so on. Do you think there's a risk that attacks get wrongly attributed, or that technical information goes missing? Do you think there's a trend where we're perhaps not giving the public factual or scientific output?

For sure. The first part is that attribution is hard. There are many, many threat actors, and some of them use common TTPs, tactics, techniques and procedures, so it's sometimes hard to distinguish them from one another. But I do think it's a best effort, and because there's so much industry collaboration in the field, it makes things much easier for researchers, which ultimately helps the public get the best information.
It's not perfect, but it's a best effort, I find.

I think the truth is in the code, so whenever malware is involved, you can't fully understand what's going on without thoroughly reverse engineering the samples. We actually had a case last week, I think, where some people put up a blog post saying they had found similarity between Industroyer2 and another malware family. They used code similarity analysis techniques, and indeed some parts of the code in Industroyer2 were pretty much identical to parts of the code in that other malware family, whose name I forget. So they were asking: why is this going on? Is there a connection? The problem is that one of our reverse engineers spotted that the code similarity was in standard libraries that are probably present in 75% of C code. So yes, there was similarity, but the person doing the analysis didn't understand what they were looking at, and that led to a very, very wrong attribution. We called it out, nicely I think, and they updated the blog post saying further research showed it was not a real connection. So it's important to pay attention to the reversing, because as I said, I believe the truth is in the code.
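As a side note for readers, the standard-library false-positive problem described here can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the function "bodies" are placeholder byte strings standing in for disassembled functions, and the allowlist plays the role of library-identification signatures (in the spirit of FLIRT-style matching). The idea is simply that similarity should be computed after subtracting known library code.

```python
import hashlib

def fn_hash(code: bytes) -> str:
    """Hash a function body (stand-in for a real code-normalization step)."""
    return hashlib.sha256(code).hexdigest()[:16]

# Hypothetical function bodies extracted from two samples.
# In practice these would come from a disassembler, not byte literals.
sample_a = [b"memcpy-impl", b"strlen-impl", b"custom-c2-protocol"]
sample_b = [b"memcpy-impl", b"strlen-impl", b"custom-wiper-logic"]

# Known standard-library code: an allowlist of library function hashes.
stdlib = {fn_hash(b"memcpy-impl"), fn_hash(b"strlen-impl")}

def similarity(a, b, ignore=frozenset()):
    """Jaccard similarity over function hashes, minus an ignore set."""
    ha = {fn_hash(f) for f in a} - ignore
    hb = {fn_hash(f) for f in b} - ignore
    if not (ha | hb):
        return 0.0
    return len(ha & hb) / len(ha | hb)

naive = similarity(sample_a, sample_b)             # counts shared stdlib code
filtered = similarity(sample_a, sample_b, stdlib)  # stdlib removed first
print(f"naive={naive:.2f} filtered={filtered:.2f}")
```

The naive score is inflated purely by shared library code; after filtering, the two samples share nothing, which is the conclusion the reverse engineer reached in the anecdote above.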
I want to add to what Leon and Alex said; they make valid points about the code, and that's where a lot of the proof is. But I've also seen code designed to trick analysts into thinking, "oh, this attribution, I see some interesting strings, it may look like it comes from a particular group." And this is where, I think, the original question was going: there are individuals out there who do data analysis, who try to make attributions with whatever existing data they're looking at. Those individuals are also important, controversially speaking, but I think the key is having reversers and analysts work together. That's also where I come from in my field of work: I create a lot of write-ups on what I've reversed, but I also work with analysts to see what they have on their end, and there's a lot of discussion back and forth. Attribution is a hard thing, and it's a never-ending process.

Thank you. I have another question. We've seen it during the invasion of Ukraine, but also in previous conflicts, although I think it has never been as clear as during the past few months in Ukraine: malware is used in the context of war. Do you think malware research as a field has an impact outside of the technical sphere? We talk a lot about our analysis, but do you think our work actually has repercussions outside of the technical crowd?

Yes. But to what extent? Of course it has some repercussions; how much is hard to tell. If you document a campaign or an attack that was never discovered before, you're going to expose tactics, you're going to expose assets used by the attackers. Some attackers will immediately pivot to something totally different; some will keep using the same C2s and the same malware, just tweaked a little. In those cases, did the publication really have an impact? Maybe not, or maybe it helped people defend better, I guess.
So it's a tricky question. I think we'll never know how much impact our publications have, because those impacted are governments, and they won't call us and say, "hey, screw you, you messed up our attack," or something like that.

Anyone else? No? Okay, I'd like to hear Ashir's take on that one.

I believe the intelligence we generate leads to protection mechanisms, both enhancements of existing mechanisms and the creation of new ones. And when we stop an attack at the right time, whether it's against critical services, a utility, or a hospital, in certain instances we don't really know how much of the attack we actually stopped and how much damage we actually prevented. That's a conundrum, because you want to stop an attack as soon as possible, but if we do that, sometimes we don't learn the scope of the entire operation. So the work we do is very important: even when we publish disclosures after which the attackers are not forced to change their TTPs, I'm very sure that in one form or another we are protecting some of our customers, or the general public as a whole.

We have two interesting questions from the audience. The first is about trust in security vendors, the security products installed: how do you address the issue of trusting a vendor, and the possibility that it may be compromised or overseen by its respective government?

That's a political question, to be honest, not really a technical one. I'll just say that, honestly, this is outside my field and I'm not qualified to answer. Sorry, Captain Kangaroo.

The other one was whether we've seen an increase in mobile malware versus desktop malware. Versus what? Desktop malware, the regular stuff we see.

I've got another one. The person said it's a noob question, but I think it's a good one: it's about how attribution works.
How do we decide to publish an assessment that an attack was performed by a particular group? And do we have thresholds for when we think we have enough evidence or clues? It's a difficult question, actually.

I can try. Yeah, sure.

So, I guess it's: how do you assess? It's a hard one, and every time I have to write an intel report on a piece of malware, I am constantly reviewed and critiqued on my assessment. A lot of the criteria, I guess, are easier for us as reverse engineers, because you look at code and that's a bit more factual. Especially if you see a particular technique doing something, then there's a high probability, a high confidence, that it proves that thing. But when it comes to answering questions like "do you know, or can you foresee, how this piece of malware or this attacker will plan their next move?", that's difficult, and that's not an assessment I like to write; I try to avoid it. So it depends on the individual. When I do an assessment, it's very technical, and that's how I like to keep it.

I agree with that, and I'd also like to add that sometimes we do get lucky. The opsec of the threat actors is not always on point, and it happens that we get lucky and gain extra insight into who these people are, how they're doing things, and why. Purely out of luck. So this does happen.

And there's no single recipe for attributing an attack to a threat actor; if there is one, we're not aware of it. It's about comparing different indicators. Some are stronger, some are weaker, but they all matter. There are technical indicators, for example the reuse of an IP address that is not something like a shared hosting provider; that's a fairly good indicator. Not foolproof, but good. Then there's the malware itself: have we seen that malware before?
Was that malware previously attributed by the industry as being operated by a single entity labeled APT-something? Then you've got the targeting, and other TTPs like the use of spearphishing. Put it all together and you can make an assessment that everything points to a campaign launched by this known actor. But again, it's an assessment; it's rarely a 100% determination.

Do you want to add anything, Ashir?

Yeah, I agree with everyone on the panel: attribution is hard, and it tends to be very subjective at times. It depends on the amount of information you have, the output of your reversing exercises, as well as your institutional knowledge, and on looking at open-source intelligence, code similarities, and things like that. What I also want to highlight here is that it's okay if you can't attribute something. That's not a problem. We can convey doubt in an effective manner, and that's completely okay and justified. I'd rather convey doubt effectively than come up with a wrong attribution, any day of the week.

All right, thank you very much, and thanks to everyone for your presentations.